/r/AZURE
Join us in discord here: https://aka.ms/azurediscord.
The subreddit for all info about Microsoft Azure-related news, help, info, tips, and tricks.
Official Discord: https://discord.gg/cMxFErsEDB
Stuck? Looking for Azure answers or support? Reach out to @AzureSupport on Twitter.
Obviously roles like Global Admin, User Admin, Teams Admin, Exchange Admin, etc. will all be PIM-enabled; that makes sense.
But what about all the other roles? Insights Admin, Microsoft Entra Joined Device Local Admin, Search Admin, etc. There's a ton of them. Should they all be PIM-enabled too? If not, what's the process for using these roles?
How fast do they respond to Severity C cases? My email says I will receive an update every 24 hours, but it's been 2 business days and I haven't heard anything. Also, is there a good chance they will provide me with a full or partial refund?
I was doing some personal learning on my personal tenant for work, so I could explore the Databricks admin privileges and go through an end-to-end project without asking for permissions. After I was done that night, I left the Azure Databricks services running, which racked up a bill of ~$400.
I sent an email to Azure describing my mistake on Jan 30. They replied 8 hours later with an action plan to validate that services are fully stopped (I've already taken the steps to delete all resources and pause my billing), and the agent said he can create a request with their finance team to look into the possibility of providing me with a refund.
Question: I was about to take the AZ-900 Azure Fundamentals exam, but during the check-in process I got an error and could no longer take it. Does anyone know how to reschedule it? I was going through the process with Pearson VUE. Has this happened to anyone else? Does anyone know what to do in these cases?
Does anyone have experience with restricting communications to the ARM API from going over the internet and instead routing them through a private connection somehow (IP-wise)?
I discovered that there is a Resource Management Private Link service (documentation linked below), which is described as solving this particular problem. However, the documentation is very scarce and only shows how to provision the resource, not really how to configure it for real-world scenarios.
If I am misunderstanding something here, please let me know. From my understanding, communications to the ARM API require outbound internet access, despite traffic being preferably routed over the Azure backbone by default.
Concrete example:
From my understanding of the attached documentation, this should be possible using said service, but I don't yet understand how it is to be configured exactly. If I follow the instructions, I end up with a Resource Management Private Link resource, connected by Private Link to a dedicated Private Endpoint, effectively giving it a private IP. Now what? How can I make sure that an API request executed on this VM against ARM travels through this private connection rather than to the public API endpoint? I can imagine it can be done through some DNS trickery, but is there a more direct, by-design way?
Looking forward to hearing your input, as I found literally zero information about this online.
Google has built-in solutions for this, and I'm a bit baffled that sovereign cloud and configuring cloud infrastructure in Azure to be "fully private" (using only PEs, VNets, Private Links, etc.) is common practice, yet the management interface of said infrastructure still has to traverse the internet. For instance, limiting Azure portal access to a dedicated "intranet" IP range, or some bastion solution, is common practice. What about CI/CD (like Azure DevOps Server, on-premises) with the ARM API? Help me understand!
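For context, the rough provisioning flow I pieced together from the docs looks like the following. All resource names and IDs are placeholders, and I'm unsure about the exact DNS records, so treat this as a sketch rather than a verified recipe:

```shell
# 1. The Resource Management Private Link resource itself
az resourcemanagement private-link create \
  --resource-group rg-mgmt --name armpl --location westeurope

# 2. Associate it with the root management group so ARM traffic
#    for the tenant can use it
az private-link association create \
  --management-group-id <mg-id> --name <uuid> \
  --privatelink <armpl-resource-id> --public-network-access enabled

# 3. A private endpoint in the VNet, pointing at the private link
az network private-endpoint create \
  --resource-group rg-mgmt --name pe-arm --vnet-name vnet-hub \
  --subnet snet-pe --private-connection-resource-id <armpl-resource-id> \
  --group-id ResourceManagement --connection-name arm-conn

# 4. DNS: a privatelink zone linked to the VNet so that, from inside it,
#    management.azure.com resolves to the private endpoint IP
az network private-dns zone create -g rg-mgmt -n privatelink.azure.com
az network private-dns link vnet create -g rg-mgmt -n arm-dns-link \
  --zone-name privatelink.azure.com --virtual-network vnet-hub \
  --registration-enabled false
```

Step 4 is the part I'm least sure about; as far as I can tell, the "DNS trickery" is exactly this private DNS zone, so that the VM resolves the ARM hostname to the private endpoint instead of the public address.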
Trying to create a Python script that accesses Azure Key Vault and retrieves a stored key. This exercise is to get kids used to Key Vault: create a key and try to access it using a Python script that runs locally. I am using a student account and cannot access Microsoft Entra ID to create a service principal, which seems to be the way most people access Key Vault locally.
I want to know if there is any way I can just use Python to access Key Vault and get a key without having to create a service principal.
Thanks!
Edit: I am a student myself, but I am trying to create a do-it-yourself activity with instructions to follow so the students can do it at their own pace. I am on the free student plan for now, and I read everywhere that I need Global Admin permission to use a service principal. Is there a workaround so I can do it without a service principal and within the scope of my permissions?
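For reference, a minimal sketch of the no-service-principal route I'm considering: the azure-identity package's InteractiveBrowserCredential (or AzureCliCredential after an `az login`) authenticates as the signed-in user, so no app registration is needed, assuming the account has a data-plane role such as Key Vault Secrets User on the vault. Vault and secret names below are placeholders, and the SDK imports are inside the function so the file can be loaded without the packages installed.

```python
def vault_url(vault_name: str) -> str:
    """Build the public-cloud Key Vault URL for a vault name."""
    return f"https://{vault_name}.vault.azure.net"


def get_secret(vault_name: str, secret_name: str) -> str:
    """Fetch a secret as the signed-in user (no service principal).

    Requires: pip install azure-identity azure-keyvault-secrets
    and a role such as 'Key Vault Secrets User' on the vault.
    """
    # Imported lazily so this module can be read/imported without the SDK.
    from azure.identity import InteractiveBrowserCredential
    from azure.keyvault.secrets import SecretClient

    credential = InteractiveBrowserCredential()  # opens a browser sign-in
    client = SecretClient(vault_url=vault_url(vault_name), credential=credential)
    return client.get_secret(secret_name).value
```

Usage would be something like `get_secret("my-student-vault", "my-key")` (placeholder names) after installing the two packages.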
Quick question regarding the SC-200 exam. I have been studying every day for a few hours for the last two weeks. To practice, I use MeasureUp exams and the MS Learn assessments, and now get around 90s in both. But I find myself just learning the questions and answers rather than the subject matter itself, or at least I think so. If I know all the answers to the MeasureUp practice exams, am I in a good place for my exam on Thursday? Do you have any suggestions on how to go about studying for the next 3 days with those scores? Thanks!
Hey folks, question about moving forward with some client-facing apps within Power Platform. Seemed like B2C was the way to go as it's more mature (I'm sure some will laugh at that word choice), but I do see Entra External ID being mentioned more, and it's in preview for Power Pages at least. What's the consensus on it? Are things expected to change much with it? As much B2C hate as I see around, if it's getting support for the next 5 years and not getting major updates... possibly more dependable for a little while? Thanks!
I'm developing a SaaS app that has access to various customers' storage accounts. There is currently ONE service principal, and customers have to give it access to their storage accounts. However, this creates the following problem: Customer A can (in theory) access Customer B's storage account through the service principal in our app (assuming they know the names of the containers and objects).
In AWS the way this is handled is by using externalID - essentially a shared secret between the SaaS platform and the customer.
Here are the AWS docs: https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html
What is the equivalent for Azure? I’m very new to it but have some years of experience with AWS.
I found things about SAS tokens - but I think that’s more like signed URLs.
The clients that raised this say that we should just have one managed identity in our SaaS app per customer. But how do we keep this maintainable?
Hi,
We have recently moved from virtual machine scale sets to container apps for many of our services, but we have noticed a lot of strange network issues. We get a worrying number of timeouts and connection-refused errors when containers talk with each other, and also when Azure calls the health checks. And this is in a staging environment with practically no traffic.
When container app A talks with container app B, and gets a timeout or connection refused, only container app A logs an error. B seems completely oblivious to the fact that A tried to talk to it and failed.
CPU and RAM usage is below what we have allocated for them, and the same is true for the database. The container apps run a mixture of different stacks, like Java, Node, and Python. So it seems like a network or architecture issue rather than an issue with our code.
How can we troubleshoot this? No one in our team is a networking expert.
I am trying to use the Azure free trial. In the phone validation part, regardless of whether I enter the phone number with the country code or not, or press text or call, the results are as follows:
Unfortunately, I was unable to create a support ticket with Azure support because of another error which I face from time to time in the Azure portal:
(though this is an admin account)
The problem is that I am the administrator and no one else has permission to turn this on. I can log in, but as soon as I try to do something I get the same message. I have looked things up and tried on my mobile phone, but it still errors. Can someone help me, please?
Hello.
We have an issue that has started creeping up on us lately. Some of our AVD VMs that have users logged in are leaving their sessions in a disconnected state after the users leave; it's not ending the session. We have GPOs in place to end disconnected sessions after 5 minutes, so even if all the users are disconnecting and not logging out, the sessions should not be stuck there. If you manually clear them, things are fine and they go away. The issue doesn't happen on one particular VM; it can be any of them in the pool of 17 at various times.
Anyone else experienced something like this?
hi guys
Azure DevOps: Can I run a CosmosDB query from a pipeline?
I was seeing this is possible for SQL but I don't see documentation for CosmosDB.
Basically my idea is to run this query; I will set the TTL = 1 so items start deleting:
SELECT VALUE COUNT(1) FROM c
and this will print the number of items in the container, so I can count items, and when the count equals 0, set the TTL for that container back to the default.
Any ideas? Or is that not possible, and I can only set the TTL using the Azure CLI while the query has to be run manually?
** SQL: https://www.blueboxes.co.uk/executing-azure-sql-database-queries-from-azure-devops-pipelines
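As far as I can tell there's no first-class az command for running arbitrary Cosmos SQL queries, but nothing stops a pipeline step from running the query through the Python SDK. A sketch of such a step; the pipeline variable names, database/container names, and the azure-cosmos dependency are all assumptions:

```yaml
steps:
  - script: pip install azure-cosmos
    displayName: Install Cosmos SDK

  - script: |
      python - <<'EOF'
      import os
      from azure.cosmos import CosmosClient

      client = CosmosClient(os.environ["COSMOS_URL"],
                            credential=os.environ["COSMOS_KEY"])
      container = client.get_database_client("mydb") \
                        .get_container_client("mycontainer")
      # SELECT VALUE COUNT(1) returns a single scalar row
      count = next(iter(container.query_items(
          query="SELECT VALUE COUNT(1) FROM c",
          enable_cross_partition_query=True)))
      print(f"Items remaining: {count}")
      EOF
    displayName: Count items in container
    env:
      COSMOS_URL: $(cosmosUrl)
      COSMOS_KEY: $(cosmosKey)   # secret pipeline variable
```

The printed count could then gate a follow-up step that resets the container's TTL once it reaches 0.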
thanks
Bastion SSH with Entra credentials in portal went GA in November 2024.
Any news for when RDP might be supported in a similar fashion?
Is there a way to get access to the preview for this feature?
I've connected my account to GitHub successfully; however, I do not see my organization's repositories listed in the Repository pick list. I am an All-repository admin and a CI/CD admin for the organization. Do I need additional roles? If so, which?
Do I need to connect using a "fine-grained" PAT (personal access token) instead?
If you work at a company and host several servers in Azure, and the service owners are using all kinds of applications such as VLC, Fiddler, and so on, how do you manage these applications?
I know about Azure Update Manager, but it cannot manage third-party apps.
So how do you perform patch management when a new server is hosted in your Azure environment?
I'd appreciate any responses.
Hi
I'm very familiar with Windows Server and traditional RDS environments but new to AVD.
Windows Search (SearchHost.exe) always used to be disabled by default when the RDS role was installed, but I have noticed that on Windows 11 multi-session images from Azure the service is not disabled by default.
We'll be running Outlook on the AVD session hosts. Windows Search used to HAMMER an RDS host if the service was not disabled, partly due to it indexing emails in Outlook. Yes, I know there are options to not index Outlook.
So I'm wondering what others do within AVD and why. Do you just leave Windows Search running if using Outlook on the AVD session hosts? Has the service improved so that it no longer hammers resources when users log in and open Outlook?
thanks.
Storage module of the Azure Master Class v3 is up.
00:00 - Introduction
00:35 - Types of storage
06:14 - Azure Storage 101
12:17 - Storage account basics
16:29 - Storage durability
17:42 - Resiliency options
23:12 - Storage account failover
25:16 - APIs and other features
29:35 - Object level replication
35:14 - Storage account services
35:39 - Blob offerings
45:08 - Files
47:58 - Table
48:24 - Queue
51:26 - Money
54:18 - Tiering
1:01:58 - Provisioned based billing services
1:04:32 - Provisioned v2 standard
1:07:36 - Data Lake features
1:15:46 - Hosting a website
1:18:46 - Access control options
1:19:01 - Account keys
1:22:17 - Blob anonymous access
1:23:24 - Entra ID integrated data plane RBAC
1:26:33 - Shared Access Signatures
1:32:14 - Don't worry about key over TLS
1:34:01 - Encryption
1:36:36 - Encryption scopes
1:39:16 - Network protection
1:44:48 - Lifecycle management
1:48:45 - Azure Storage Actions
1:52:19 - Native protection constructs
1:52:53 - Blob versioning
1:55:05 - Change feed
1:55:57 - Soft delete
1:56:42 - Point-in-time restore
1:57:48 - Azure File Sync
2:02:13 - Azure Elastic SAN
2:06:40 - Azure NetApp Files
2:12:07 - Managed Disks
2:21:37 - VM Storage
2:27:55 - Handling big volumes
2:28:55 - Storage tools
2:31:15 - AzCopy
2:32:38 - Azure Storage Mover
2:34:03 - Import and Export
2:35:59 - Data Governance
2:38:13 - Close
Hi everybody,
I have been asked to create an Azure alert for when VMs become unavailable in a specific resource group. I created a Resource Health alert using Terraform, and it is working. However, in the worst cases it takes up to 15-17 minutes before an alert is triggered and the action group is activated.
What is the default interval time for Resource Health alerts? Is there a way to minimize that delay?
Please let me know! ^_^
I am looking for somebody to study AZ-900 together.
Hey folks,
I need a hand with an Azure VWAN. We already have a few S2S tunnels that work fine, but they weren't set up by me, so I'm still learning. I created a new tunnel, and it appears "up" on both sides, but there's zero traffic; that's what the customer confirms (I couldn't find a way to check whether any traffic hits a specific tunnel).
When I try reaching their endpoint from my private network, it fails. I’m pretty sure the traffic isn’t going through the tunnel, but I don’t know how to confirm or troubleshoot. Any tips or ideas on where to look?
Thanks in advance!
Hello, I have a WebJob running in an AppService and want to detect shutdowns due to restarts so I can terminate gracefully. I have tried all the suggestions I could find online, but to no avail. I test the suggestions by changing an env variable to force a restart. My current code looks like this.
Holy #$% why is Reddit code editor so rubbish?
public static async Task Main(string[] args)
{
    IHostBuilder hostBuilder = Host.CreateDefaultBuilder(args)
        .ConfigureServices((context, serviceCollection) =>
        {
            serviceCollection
                .AddWebJobsHostedServices(context.Configuration)
                .AddWebJobsHostedOptions(context.Configuration);
        });

    IHost host = hostBuilder.Build();

    ILogger<Program> logger = host.Services.GetRequiredService<ILogger<Program>>();
    logger.LogInformation($"Starting BackgroundServices as {(Environment.Is64BitProcess ? "64 bit" : "32 bit")}");

    await host.RunAsync();
}
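As I understand it, the shutdown signal App Service gives a WebJob is file-based: shortly before stopping the job, the platform creates the file whose path is in the WEBJOBS_SHUTDOWN_FILE environment variable, then allows a grace period. A minimal sketch of that watch loop in Python just to show the mechanism (in .NET, the WebJobs SDK's WebJobsShutdownWatcher wraps the same file check); `work` here is a placeholder for the job's unit of work:

```python
import os
import time


def shutdown_requested() -> bool:
    """True once App Service has created the shutdown-notification file.

    WEBJOBS_SHUTDOWN_FILE is set by the platform; when running locally
    it is usually unset, in which case we never report a shutdown.
    """
    path = os.environ.get("WEBJOBS_SHUTDOWN_FILE")
    return bool(path) and os.path.exists(path)


def run_until_shutdown(work, poll_seconds: float = 1.0) -> None:
    """Call work() repeatedly until the shutdown file appears."""
    while not shutdown_requested():
        work()
        time.sleep(poll_seconds)
```

The same check can be folded into a hosted service's loop so the process exits cleanly instead of being killed at the end of the grace period.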
I currently have a hybrid setup configured, with users being synced from on-prem AD to Entra and 2FA enforced at the cloud level. Currently only users and groups are being synced up to Entra, and users are assigned a P1 license.
I am now looking to implement 2FA for all admins when they log in to a system via RDP, and I have read that this is possible with Conditional Access integrated with on-premises AD FS. However, we are not currently syncing devices, which I read may also be a requirement to extend 2FA to on-prem.
Ultimately the question is - can I implement 2FA on-prem in this way to a single OU (admin OU) for when an admin RDPs to an on-premises server?
Hello,
I'm not sure if it is a firewall issue or a routing issue. I connect with the Azure VPN client and can ping the server; several other people can as well. I have one user with a generic setup as far as I know, but after he gets a green connection in the client, he can't ping the resources. We have a VM that he should be able to ping but can't.
I'm pretty new to Azure, so I'm not sure where to start troubleshooting.
The Windows firewall on the PC that can't ping the Azure resources has been turned off temporarily.
The Windows firewall on the Azure VM was also turned off temporarily; it still couldn't be pinged from that one workstation.
Do I need to add the internal subnet of the PC that can't ping somewhere in Azure?
Thanks
Greetings!
I am pretty new to infrastructure as code (Bicep), but I find it very fun, though sometimes challenging, because I learned most things from using the portal.
Right now it's a mix between things I manage in the portal and some as code. I find network settings to be easier in code. Is there a rule of thumb on how to split this?
Like, if I want to create a VM, it feels like it needs quite a lot of lines of code and SKUs that I don't know, so the portal would be easier. That is, until the day I want to clone the VM, when deploying a new one through code might have been easier.
How do you guys use bicep? Are there some easy to use/learn resources out there? Or template libraries?
Hi, is it possible to update the zones of an Azure Application Gateway without having to recreate it? Currently my App Gateway is not using zones.
Edit: I don't know why some "experts" here get pissed when I ask a basic question. I am not ranting; I am adding some info. Where in the docs does it say anything about this?
I am trying to create an AKS cluster with just 2 nodes for learning purposes, but I keep getting this error even though I have already upgraded from the free tier to the pay-as-you-go model. I am not sure why it says insufficient quota. Can't they just assign me more quota for my cluster, or am I supposed to request it?
{"code":"InvalidTemplateDeployment","details":[{"code":"ErrCode_InsufficientVCPUQuota","message":"Preflight validation check for resource(s) for container service az-cluster in resource group az-cluster_group failed. Message: Insufficient regional vcpu quota left for location eastus. left regional vcpu quota 0, requested quota 4. Details: "}],"message":"The template deployment 'microsoft.aks-1738566358353' is not valid according to the validation procedure."}
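As far as I can tell, quota is tracked separately from the billing tier: moving to pay-as-you-go changes who pays, not the regional vCPU cap, which is why validation can still report 0 vCPUs left. A sketch for checking what's left before filing an increase request (the region and VM family below are examples):

```shell
# Show current vCPU usage vs. limit for everything in the region
az vm list-usage --location eastus --output table

# Filter to one VM family, e.g. the family of the chosen AKS node size
az vm list-usage --location eastus \
  --query "[?contains(name.value, 'StandardDSv2Family')]" --output table
```

The increase itself goes through the Quotas blade in the portal (or a support request); it is not granted automatically.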
What would be the difference in functionality between a vnet with an address space of 10.10.0.0/22 that has 4 /24 subnets defined, vs just setting the same 4 /24's as unique address spaces?
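As far as I can tell, inside a single VNet the two layouts address the same hosts; the differences are operational, since one /22 address space is a single summarizable prefix for peering and routing, while four separate /24 address spaces are four independent prefixes. A quick stdlib check that the four /24s tile the /22 exactly:

```python
import ipaddress

space = ipaddress.ip_network("10.10.0.0/22")

# Splitting the /22 yields exactly the four /24 subnets
subnets = list(space.subnets(new_prefix=24))
print([str(s) for s in subnets])
# -> ['10.10.0.0/24', '10.10.1.0/24', '10.10.2.0/24', '10.10.3.0/24']

# Every /24 is contained in (summarizable as) the single /22
assert all(s.subnet_of(space) for s in subnets)
```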
I'm trying to set up Cluster-Aware Updating, but I'm running into some intermittent issues.
When using Cluster-Aware Updating, I can only connect to the cluster with one of the cluster nodes' IP addresses; the cluster name doesn't work. I found out this is because the cluster name resolves to an internal load balancer IP in Azure, and there are no load balancer rules set up for all the various WinRM/RPC-type traffic that Cluster-Aware Updating relies on.
I tried editing the hosts file of my management machine so that the cluster name would resolve to each of the nodes inside the cluster, essentially removing the need for a load balancer rule. This initially had some positive impact, but it has gone back to displaying the exact same behaviour as before. It is just so intermittent that sometimes CAU can connect to the cluster name and start the update process, but then fails and can't contact the cluster, etc.
My next step is to add two load balancer rules that allow these ports, as they are all the ones I've identified as needed for Cluster-Aware Updating to even connect to the nodes in the first place:
TCP: 0,53,88,135,137-139,389,445,464,636,1025,1026,3268,5985-5986,24158,49152-65535
UDP: 0,53,88,123,135,137-139,389,464,3343,5985-5986,24158,49152-65535
I believe that, from that point, it should work. But I would really appreciate it if anyone can think of a reason why this still wouldn't work, or if there is a better way of doing this?
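One alternative I'm considering instead of enumerating all of those ports: if the internal load balancer fronting the cluster name is a Standard SKU, a single HA-ports rule forwards all TCP and UDP ports at once. A sketch with placeholder resource names (whether floating IP should be enabled depends on how the cluster IP is configured on the nodes):

```shell
# HA ports: protocol All with frontend/backend port 0 forwards every port
az network lb rule create \
  --resource-group rg-cluster --lb-name ilb-cluster \
  --name ha-ports-rule \
  --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name cluster-frontend \
  --backend-pool-name cluster-nodes \
  --floating-ip true
```

HA ports are only available on internal Standard load balancers, so this won't help if the cluster sits behind a Basic SKU.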