/r/AZURE


Join us in discord here: https://aka.ms/azurediscord.

 

The subreddit for all Microsoft Azure-related news, help, info, tips, and tricks.

 


 

Official Discord: https://discord.gg/cMxFErsEDB

 


Stuck? Looking for Azure answers or support? Reach out to @AzureSupport on Twitter.  




Spam

If your post is caught by the spam filter, just send us a message and we'll approve it as soon as possible (as long as it's relevant and it's not spam).



1

New sample: Serverless ChatGPT with RAG using LangChain.js, TypeScript and Azure

Hey folks!
We just published a new sample project that aims to show the simplest way to build (yet another) ChatGPT-like app with Retrieval-Augmented Generation, using TypeScript and LangChain.js:

https://github.com/Azure-Samples/serverless-chat-langchainjs/

We made it so that it can run fully locally using Mistral 7B and Ollama, without the need to deploy anything. But of course, you can also deploy it to Azure with OpenAI models in one command :)
Feel free to ask any questions and give your feedback; I'll do my best to answer!

Note: I've been a long-time lurker here, but not much of a contributor. I hope this kind of post is OK; otherwise, please let me know!

0 Comments
2024/04/09
15:27 UTC

1

XDR Defender for Cloud GDAP

Wondering if anyone has clear steps on how to help with the following.

GDAP is used to access customer portals such as Defender, but with the new Defender for Cloud alerts now surfacing in XDR, users are unable to actually see those alerts.

The documentation mentions new RBAC for the right permissions, which has been deployed in the target tenant (a permission role has been created), but as we are using GDAP it's not clear to me where to go to ensure users accessing via GDAP can now see the Defender for Cloud alerts in XDR.

1 Comment
2024/04/09
15:16 UTC

1

Key Vault permissions for UAMI to get SSL certificate for App Service

Hello.

I have a Bicep template to deploy an App Service, which it does nicely, but I'm failing to get an SSL certificate in there for my custom domain. The certificate is stored in a Key Vault in a different resource group (same subscription). I have assigned a UAMI, which is also in the other resource group, to the App Service, and granted the UAMI the Key Vault Certificate User role on the Key Vault.

Unfortunately, I get an error saying I need to grant the necessary permissions for the service to perform the requested operation. I've even tried making it a Key Vault Administrator.

The Key Vault is configured to use RBAC - do I need to change it to use policies to make this work?

Strangely, I also have an Application Gateway (for WAF in front of the App Service) configured with the same UAMI to get the same certificate from the same Key Vault and it works without error.

What am I missing here?
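One thing worth trying, hedged since I can't see your template: with RBAC vaults, App Service is documented to read certificates through the vault's secrets endpoint, so the identity may need Key Vault Secrets User in addition to Key Vault Certificate User. A minimal PowerShell sketch (resource and identity names are placeholders):

    # Placeholders: adjust RG/vault/identity names to your environment.
    $uami  = Get-AzUserAssignedIdentity -ResourceGroupName 'rg-shared' -Name 'uami-appsvc'
    $vault = Get-AzKeyVault -VaultName 'kv-shared'

    # App Service retrieves certificates via the secrets endpoint, hence the secrets-read role.
    New-AzRoleAssignment -ObjectId $uami.PrincipalId `
        -RoleDefinitionName 'Key Vault Secrets User' `
        -Scope $vault.ResourceId

If that doesn't help, the other usual suspect is that the certificate import is performed by the App Service resource provider rather than your UAMI, in which case the "Microsoft Azure App Service" service principal needs read access on the vault as well. No need to switch the vault back to access policies; RBAC works for this scenario.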

0 Comments
2024/04/09
15:13 UTC

1

Limiting Access to Blobs Using ACL - Problem with Access Keys

I'm looking to share data around an organisation using ADLS Gen2, and have been planning on using ACLs to handle security. It's straightforward enough to set up and mask the groups, and it works perfectly when using Entra authentication. The problem is that there's always the option to auth via access key, which gives effectively superuser permissions on all containers within a given storage account. I thought this might just be me as the "owner" of the storage account, but it looks like other people can also auth to my containers using access keys.

I suspect this is some setting my org is using, but before I start navigating tickets - is there a way I can deal with this myself? These containers will be linked to various other services so I can't turn off access keys entirely, but it would be nice if I could pick and choose who has access to what.

Some redundant screenshots just to show exactly what the issue is: https://imgur.com/a/Tew9Jb1
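Until the org-level question is answered: shared-key access can be disabled per storage account, so one option may be to split data so that accounts which don't need keys have them turned off. A hedged PowerShell sketch (names are placeholders); note this breaks anything on that account that still authenticates with the key:

    # Disable shared-key (access key) auth on one account; Entra ID + ACLs keep working.
    Set-AzStorageAccount -ResourceGroupName 'rg-data' `
        -Name 'stadlsshared' `
        -AllowSharedKeyAccess $false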

0 Comments
2024/04/09
14:51 UTC

1

WVD Outbound Internet via LB - 403 Errors

Good Morning,

I've sunk a ton of time into trying to resolve an issue I'm having with outbound internet using a LB for NAT. I have a handful of websites that return a 403 error when accessed from a VM behind said load balancer. Removing a VM from behind the LB and assigning it a different public IP results in the websites working without issue. I've tried creating inbound NAT rules for 80 and 443, as well as load-balancing rules for the same ports, to no avail. Does anyone have any guidance on how I can resolve this issue that's been driving me crazy? Any help would be greatly appreciated!
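Not a fix, but a quick sanity check that has helped me with similar 403s (which are often IP-reputation or geo blocks against the LB's outbound IP): confirm which public IP the sites actually see from behind the LB, then compare it with the IP that works. Placeholder names below; ifconfig.me is just one of many third-party echo services:

    # Run on a VM behind the load balancer: what egress IP do target sites see?
    Invoke-RestMethod -Uri 'https://ifconfig.me/ip'

    # Compare against the frontend IP of the LB's outbound configuration.
    Get-AzPublicIpAddress -ResourceGroupName 'rg-net' -Name 'pip-lb-outbound' |
        Select-Object Name, IpAddress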

2 Comments
2024/04/09
14:47 UTC

1

Deploying Your Website with Terraform and GitHub Actions CI/CD Pipeline

Hey folks,

Just released a new blog page with some Terraform tutorials. The basis is simply deploying a website through Azure App Service with Terraform, with a sprinkle of GitHub Actions thrown in there too!

Please feel free to have a read here and leave any suggestions for further improvements, or anything you’d like me to cover in the future!

https://connorokane.blog/blog-post-website

I understand this is a simpler project idea, but as it's my first post I wanted to start small and work my way up to more interesting blog posts.

Many Thanks!

0 Comments
2024/04/09
14:25 UTC

1

Can a Logic app be triggered by the receipt of a `multipart/form-data` post?

I'm working with Jotform. They "support" webhooks, but supply the data as multipart/form-data rather than application/json.

Is there a way to configure the HTTP request step to capture and parse multipart/form-data?
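For what it's worth, the workflow definition language does have multipart accessors, so the Request trigger's output can be picked apart without a custom parser. A hedged sketch (in code view) of a Compose action grabbing the first part's body; part indexes and names depend on what Jotform actually sends:

    "actions": {
        "Compose_first_part": {
            "type": "Compose",
            "inputs": "@triggerMultipartBody(0)"
        }
    }

triggerMultipartBody(index) returns the body of one part of a multipart trigger payload; inspecting triggerBody() on a test run shows how the parts arrive.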

2 Comments
2024/04/09
14:09 UTC

1

What are some cost efficient ways of provisioning development environments after migrating to the Azure cloud?

We are a department of 300 people across 30 teams, and we're in the process of migrating our development environments to Azure. Currently, we have almost 200 development servers that are claimed and used on an as-needed basis. However, we're finding that maintaining all these environments in the cloud with the necessary tools installed is proving expensive.

I'm looking for advice and best practices on how to make this transition more efficient and cost-effective. Specifically, we have numerous tools that need to be installed and configured before developers can start working on their user stories.

Any suggestions on optimizing our cloud setup, managing development environments, or streamlining tool installation and configuration processes would be greatly appreciated.

3 Comments
2024/04/09
13:59 UTC

1

Routing East/West Traffic via vWAN/vHub through an NVA in Another VNET

Hello Folks,

I'm trying to determine if this is supported at all, as try as I may I cannot get it to work.

Long story short, I am trying to get traffic from one VNET (VNET A) to route through a vHub to a VNET (VNET C) that has an NVA (Cisco FTD firewall), and have said firewall send the traffic on to the destination VNET (VNET B).

Basically, any time traffic needs to go from VNET A to VNET B, it would first need to go to the NVA and hairpin on the backend interface toward the destination. This is possible without a vHub, but I'm finding it impossible with one. I do see the traffic hit the back side of the NVA, but I never see it reach VNET B once it leaves the device.

Is this setup simply not supported in vHub at the moment? We're trying to filter East/West traffic, and the only way I've done this in the past is by using peering only, not vHub.

With peering you could even do microsegmentation/subnet-to-subnet filtering by pointing at another VNET's NVA to come back and filter. I'm guessing this isn't supported in the vHub world, and to do this we'd have to fall back to the peering design.

The other requirement would be subnet-to-subnet filtering too: even if a host in VNET A Subnet 1 were trying to get to VNET A Subnet 2, it'd have to cross to VNET C first and route back to be filtered by the firewall. So I'm not sure any of this is supported today via vHub, and I'm curious what others think.

This is a terrible picture I drew up to help explain what we're trying to do in order to filter this via an NVA: https://imgur.com/a/TGRdIWT

EDIT: Figured I'd circle back here; I found at least some documentation stating this is not yet supported:

https://learn.microsoft.com/en-us/azure/virtual-wan/scenario-route-through-nva

"Virtual WAN doesn't support a scenario where VNets 5,6 connect to virtual hub and communicate via VNet 2 NVA IP; therefore the need to connect VNets 5,6 to VNet2 and similarly VNet 7,8 to VNet 4."

So I guess that's my answer....

1 Comment
2024/04/09
13:39 UTC

1

Question - Functions not showing in portal/not running after update to .NET 8

I have a function app which until recently was on .NET 6. I have been making some changes to the process, and as part of that I updated the project to .NET 8.

The function app is built and deployed via Azure DevOps pipelines, which have been updated accordingly.

The issue is that once deployed, the functions are not showing up in Azure on the overview screen and do not appear to be running. They do occasionally show up, and have randomly run once or twice; I don't know if this has been a coincidence, but it seems to happen after I have toggled and saved some config settings (could be any of them; I think it's the save that actually makes the difference).

I have been looking at this for hours now and am just stuck going in circles. There is nothing in the logs indicating a problem; the release is successful, but the functions just aren't there.

Any help would be greatly appreciated.
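Shot in the dark, since you didn't mention the hosting model: moving from .NET 6 in-process to .NET 8 usually means the isolated worker model, and if FUNCTIONS_WORKER_RUNTIME still says dotnet the host can deploy "successfully" while indexing zero functions. A sketch, assuming you're on the isolated model (names are placeholders):

    # Assumption: the project now targets the .NET isolated worker model.
    Update-AzFunctionAppSetting -Name 'func-myapp' -ResourceGroupName 'rg-func' `
        -AppSetting @{ FUNCTIONS_WORKER_RUNTIME = 'dotnet-isolated' }

That would also fit the "saving unrelated config occasionally fixes it" symptom, since any settings save restarts the host and re-triggers function indexing.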

0 Comments
2024/04/09
13:32 UTC

1

Have you purchased OpenAI PTUs? How much did it cost?

My company's looking to build some internal RAG and other OpenAI-based apps, and they'd like to use PTUs in order to have more predictable performance and latency. We can't just buy PTUs directly; we'd have to start with our sales rep.

I've reached out, but I'd like to get an idea of costs so the finance folks can start budgeting. Can anyone speak to the PTU cost breakdown if you went down this path? What region, which model, how much throughput did you provision, how much did you have to pay upfront, etc.?

0 Comments
2024/04/09
13:30 UTC

1

Azure Policy: get VNET prefix dynamically

Hello r/AZURE,

I am currently building Azure Policies to enforce the company guideline that all traffic must go through an NVA in a hub-and-spoke setup. I found and tested the following policy, which enables us to deploy routes based on parameters defined in the policy: AzPolicyAdvertizer - Deploy route to route tables.

This works fine for routes that do not change across the spokes (default route, hub VNET route). However, we also send traffic that crosses subnet boundaries through the NVA. Therefore every route table carries two additional routes:

  • Destination: VNET address space, Next hop: NVA, Next hop IP: NVA IP (traffic leaving the subnet is filtered)
  • Destination: the subnet itself, Next hop: Virtual network (traffic inside the subnet goes direct)

For these two to work, I need to get the VNET address space and the subnet prefix dynamically for every VNET/subnet/route table.

I found and tried the following aliases, but I don't know if it's actually possible to get these values dynamically and map them to the route tables:

  • Microsoft.Network/virtualNetworks/addressSpace
  • Microsoft.Network/virtualNetworks/subnets[*]

Does this actually work? Any ideas or inputs?
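One way to check what a policy can actually read is to dump the available aliases and their paths; a quick PowerShell sketch:

    # List Microsoft.Network virtualNetworks aliases touching address space or subnets.
    (Get-AzPolicyAlias -NamespaceMatch 'Microsoft.Network' |
        Where-Object { $_.ResourceType -eq 'virtualNetworks' }).Aliases |
        Where-Object { $_.Name -match 'addressSpace|subnets' } |
        Select-Object -ExpandProperty Name

The caveat, as far as I understand Azure Policy: field() only reads the resource being evaluated, so the aliases existing isn't the hard part; getting the parent VNET's values while evaluating a route table is, and that's typically where this approach gets stuck.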

0 Comments
2024/04/09
13:23 UTC

1

Azure and Docker

Hi everyone!

I hope everyone is alright!

I'm trying to create a Dockerfile that builds a solution composed of multiple .csproj projects on .NET 6.0. The solution has several dependencies, some of which come as NuGet packages from private repositories. I'm using Docker Desktop with the builder set to desktop-windows.

When I run the command curl -u "any:%PAT2DOCK%" "https://pkgs.dev.azure.com/xxxxxxx" it works, which leads me to believe that the PAT is set correctly. In addition, inside the Dockerfile I also tested connectivity to a website and it works, which makes me believe networking isn't the problem either. However, I still can't restore the project; I always get "error NU1301: Unable to load the service index for source".

Has anyone else experienced this? Any tips on how I can solve it?
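In case it's the issue we once hit: the curl test proves the PAT from your shell, but dotnet restore inside the build has no NuGet credentials for the feed. One common pattern is the Azure Artifacts Credential Provider plus the VSS_NUGET_EXTERNAL_FEED_ENDPOINTS variable, with the PAT passed as a build argument. A hedged sketch on a Linux build image (with the desktop-windows builder you'd use the PowerShell installer from aka.ms/install-artifacts-credprovider.ps1 instead); feed URL and names are placeholders:

    FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
    ARG FEED_PAT
    # Install the Azure Artifacts Credential Provider so dotnet restore can authenticate.
    RUN curl -fsSL https://aka.ms/install-artifacts-credprovider.sh | sh
    # Hand NuGet the PAT for the private feed (endpoint URL is a placeholder).
    ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS="{\"endpointCredentials\":[{\"endpoint\":\"https://pkgs.dev.azure.com/xxxxxxx/_packaging/MyFeed/nuget/v3/index.json\",\"password\":\"${FEED_PAT}\"}]}"
    WORKDIR /src
    COPY . .
    RUN dotnet restore MySolution.sln

Built with something like docker build --build-arg FEED_PAT=$env:PAT2DOCK . (and don't publish images from a stage that has the PAT in it).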

0 Comments
2024/04/09
12:19 UTC

1

Azure OpenAI: is it easy to cap API usage by different users/projects?

I'm hoping someone with experience can confirm or correct my understanding. My organisation has dabbled with the OpenAI service previously but is a bit concerned about controls for appropriate usage and cost.

If I understand right, we could allocate budget to certain projects (say, project X gets an initial £200 to investigate a small use case), and create an API key (and endpoint?) specifically for that project with usage capped at £200 for now. That project team would then control who has access and make sure they spend the budget efficiently until they need to request more. Separate projects or users could be given separate API keys, each with separately managed caps on usage. Does that sound right?
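Partly right, as far as I can tell. One resource (or deployment) per project with its own keys gives you the isolation, but I'm not aware of a native hard spend cap per key; the usual approximations are cost budgets with alerts (they notify, they don't block) and/or API Management rate limits in front. A sketch of a per-project budget (name, recipient, and amount are placeholders):

    # Budgets alert on spend; they do not stop the service at the cap.
    New-AzConsumptionBudget -Name 'budget-project-x' `
        -Amount 200 -Category Cost -TimeGrain Monthly `
        -StartDate (Get-Date -Day 1).Date `
        -ContactEmail 'project-x-owner@example.com'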

2 Comments
2024/04/09
11:44 UTC

11

Your experience with the Azure Cloud

Dear colleagues,

We recently transitioned to the Azure cloud. We are a company with 200 employees, and Microsoft 365 is now in the cloud. Additionally, we have a separate Revit server for drafters who use Revit and AutoCAD. We have a fibre-optic connection and use Nerdio for Azure Virtual Desktop management.

Users can connect to the cloud environment via the Windows app from the Microsoft Store. We have set up two multi-user pools, one for regular desktops and one for graphics workloads. However, the overall user experience is poor. There have been numerous complaints that working locally is much faster than working in the cloud. I understand that working in the cloud involves data being sent back and forth over the internet, but the startup time is long and loading programs is a hassle. We have enabled caching in Outlook, which improved performance significantly. However, we continue to experience issues with programs like Revit and AutoCAD.

We have also conducted speed tests, and there are no issues with the connection.

I'm genuinely curious if other companies are experiencing similar issues, or if there's something we can do to improve speed.

If you need more information, please let me know. Thank you in advance for your responses.

27 Comments
2024/04/09
11:02 UTC

1

[Teach Tuesday] Share any resources that you've used to improve your knowledge in Azure in this thread!

All content in this thread must be free and accessible to anyone. No links to paid content, services, or consulting groups. No affiliate links, no sponsored content, etc... you get the idea.

Found something useful? Share it below!

0 Comments
2024/04/09
11:00 UTC

7

Unconventional ways to cut costs

I've gone through pretty much everything: automatic shutdowns, hybrid benefit, reservations, and the like. I've had some success using Logic Apps to deploy Bastion only as needed (found that method on this subreddit), and I figured out that keeping unused data disks as snapshots cuts their cost by a good percentage.

Did any of you employ similar tactics? I'm keen to find out about anything you've introduced to your environment that reduced your cost, even if the savings themselves are tiny.
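For anyone wanting to copy the snapshot trick above, a minimal PowerShell sketch (names are placeholders); snapshots bill on used rather than provisioned size, and you recreate the disk from the snapshot when it's needed again:

    # Snapshot an unused data disk, then remove the disk to stop paying provisioned-size rates.
    $disk = Get-AzDisk -ResourceGroupName 'rg-vms' -DiskName 'unused-data-disk'
    $cfg  = New-AzSnapshotConfig -SourceUri $disk.Id -Location $disk.Location -CreateOption Copy
    New-AzSnapshot -ResourceGroupName 'rg-vms' -SnapshotName 'unused-data-disk-snap' -Snapshot $cfg
    Remove-AzDisk -ResourceGroupName 'rg-vms' -DiskName 'unused-data-disk' -Force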

16 Comments
2024/04/09
10:59 UTC

1

Azure Files - Entra Kerberos Auth Issues

Hi all,

We're trying to implement Azure Files using the Entra Kerberos authentication for hybrid identities method and are running into an issue.

It looks like the main issue is that authentication doesn't work unless the client has line of sight to our domain controllers, which is really odd because the whole point of this authentication method is to not need that.


Basically when we map the drive using the PS script they provide we receive the error:

"New-PSDrive : The system cannot contact a domain controller to service the authentication request. Please try again later"

 

This is when the client doesn't have access to the DC, so the error is accurate, but I don't think it should be trying to contact the DC in the first place. (It does work correctly when on the office LAN with line of sight to the DCs.)

A few of us have been through the prerequisites over and over and can't see anything we've missed (PreReqs List). We've also had a case open with MS support, who don't seem to be getting anywhere fast.

Has anyone else seen this before?

Thanks
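One client-side prerequisite that's easy to miss, and whose absence can look exactly like this (the client falling back to classic Kerberos against a DC): allowing the machine to retrieve a cloud Kerberos ticket. It's a documented registry setting, normally pushed via Intune/Group Policy; a sketch for a single test client:

    # Documented prereq for Entra Kerberos: let the client request cloud Kerberos tickets.
    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters' `
        -Name 'CloudKerberosTicketRetrievalEnabled' -Value 1 -PropertyType DWord -Force

A reboot and a klist purge before re-testing the mapping is worthwhile. If that's already in place everywhere, ignore me.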

0 Comments
2024/04/09
10:53 UTC

1

Can I create and manage EA Subscriptions via PowerShell Graph or PowerShell Az module with an App ID and secret cred?

How are you all automating new subscriptions for developers in your Azure environment?

I am looking at automating new subscriptions (the account type, not the alert/notification type) for developers, requested through our ITSM tool (ServiceNow). They fill out a form, it goes for approval, and then we set up the Entra groups, then the sub, and add the Entra groups to roles (Owner/Contributor, etc.). All manual right now.

What I would like to do is take everything on that form and spin up the groups/sub/roles, etc. I am not a cloud admin in our environment, but I assumed this could be done via Graph, so I requested an app with a client secret and the permissions to create and manage subs in our test tenant. I got that and am able to log in to our test tenant, but now that I'm looking at it I don't see any Graph cmdlets (PowerShell) that can create subs.

I then looked at the Az modules and found an article on Learn that suggests using New-AzSubscriptionAlias, but I didn't see a way to use my app and secret cred to log on using Connect-AzAccount/Login-AzAccount.

I am a bit stumped. Surely there is an easy enough way to automate this? Any help appreciated.

Update: I figured out I needed the -ServicePrincipal parameter in order to log in with my App ID and secret:

Connect-AzAccount -ServicePrincipal -TenantId $tenantId -Credential $cred

I'm still uncertain what other steps I might need to take.
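In case it saves the next person some digging, a hedged sketch of the remaining steps under an EA billing account (the billing scope, IDs, and names are placeholders, and the service principal also needs rights on the enrollment account to create subs):

    # Create the subscription under an EA enrollment account.
    New-AzSubscriptionAlias -AliasName 'dev-team-42' -SubscriptionName 'Dev Team 42' `
        -BillingScope '/providers/Microsoft.Billing/billingAccounts/<enrollment-id>/enrollmentAccounts/<account-id>' `
        -Workload 'Production'

    # Grant the requesting team's Entra group a role on the new subscription.
    New-AzRoleAssignment -ObjectId '<entra-group-object-id>' `
        -RoleDefinitionName 'Contributor' `
        -Scope '/subscriptions/<new-subscription-id>'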

0 Comments
2024/04/09
10:34 UTC

1

How do I stream data from an on-premises Oracle database to ADLS Gen2?

I have a requirement to stream XML BLOBs from an on-premises Oracle database into an ADLS Gen2 blob container. I was able to get a self-hosted VM, install the integration runtime and JRE, configure the IR using the keys, and test connectivity between the on-premises Oracle database and ADLS Gen2 using an Azure Data Factory Copy activity.

I have 2 questions:

  • Can Oracle triggers (like AFTER INSERT) initiate an external event?
  • Does Azure have any service that listens to this table and keeps the stream open until the data flushes out?

Could anyone please help me with how to achieve this?

The source is an Oracle table with 5 columns:

SURROGATE_ID  NATURAL_KEY1  NATURAL_KEY2  XML_COL   LAST_UPDATE_TIMESTAMP
1             AAA           XYZ           xml BLOB  2024-04-08 12:03:34
2             AAB           XYX           xml BLOB  2024-04-08 12:03:39
3             AAC           XYZ           xml BLOB  2024-04-08 12:05:27

I need the XML_COL values saved into the ADLS Gen2 blob container as separate XML files like below:

  • 1_AAA_XYZ_20240408120334.xml
  • 2_AAB_XYX_20240408120339.xml
  • 3_AAC_XYZ_20240408120527.xml

TIA

0 Comments
2024/04/09
10:17 UTC

2

Cannot kill my replicas quickly enough in Azure Container Apps - paying a lot extra, need help

According to the documentation, scaling down is based on the KEDA scaler's polling interval and a cool down period once a replica is inactive.

You can create a custom Container Apps scaling rule based on any ScaledObject-based KEDA scaler, with these defaults (in seconds):

  • Polling interval: 30
  • Cool down period: 300

I have a containerized .NET console app (Processor) that works with MassTransit Saga and uses Azure Service Bus.
Here's the flow:

  • User triggers a new request in the UI.
  • API responds to that with 202, queues a new message.
  • Processor picks the message up, validates it against an external dependency, rejects it if it's invalid; the reply is sent with SignalR.
  • If it's valid, 16 short-lived subtasks are "scattered" to another queue.
  • Processor then listens to the subtask queue. Subtasks are designed to work independently from each other, so we are trying to achieve parallelism here, hoping to spawn multiple replicas to quickly finish all the subtasks so we can "gather" them into a result.

This sounded good on paper, but we didn't really account for Azure Container Apps' KEDA defaults, and now KEDA takes a long time to kill our short-lived subtask replicas, costing us serious money.

A subtask usually takes under 10 seconds, but its replica may stay up for several minutes.
Here's a log example:

2024-04-09T07:42:34.286771665Z [07:42:34 INF] Received HTTP response headers after 63.2646ms - 200

2024-04-09T07:42:34.286791349Z [07:42:34 INF] End processing HTTP request after 63.6467ms - 200
2024-04-09T07:42:37.291720980Z [07:42:37 INF] Start processing HTTP request POST https://someexternaldependency.com
2024-04-09T07:42:37.291765327Z [07:42:37 INF] Sending HTTP request POST https://someexternaldependency.com
2024-04-09T07:42:37.359334168Z [07:42:37 INF] Received HTTP response headers after 67.3755ms - 200
2024-04-09T07:42:37.359365408Z [07:42:37 INF] End processing HTTP request after 67.915ms - 200
2024-04-09T07:42:37.361759919Z [07:42:37 INF] Start processing HTTP request POST https://someexternaldependency.com
2024-04-09T07:42:37.361780021Z [07:42:37 INF] Sending HTTP request POST https://someexternaldependency.com
2024-04-09T07:42:37.429006497Z [07:42:37 INF] Received HTTP response headers after 67.028ms - 200
2024-04-09T07:42:37.429041819Z [07:42:37 INF] End processing HTTP request after 67.3068ms - 200
2024-04-09T07:42:37.536283448Z [07:42:37 INF] Start processing HTTP request GET https://someexternaldependency.com
2024-04-09T07:42:37.536428506Z [07:42:37 INF] Sending HTTP request GET https://someexternaldependency.com
2024-04-09T07:42:37.603096224Z [07:42:37 INF] Received HTTP response headers after 66.6407ms - 200
2024-04-09T07:42:37.603154115Z [07:42:37 INF] End processing HTTP request after 66.9797ms - 200
2024-04-09T07:42:37.603747333Z [07:42:37 INF] Start processing HTTP request GET https://someexternaldependency.com
2024-04-09T07:42:37.603812686Z [07:42:37 INF] Sending HTTP request GET https://someexternaldependency.com
2024-04-09T07:42:37.668116057Z [07:42:37 INF] Received HTTP response headers after 63.9475ms - 200
2024-04-09T07:42:37.668138120Z [07:42:37 INF] End processing HTTP request after 64.2936ms - 200
2024-04-09T07:42:38.664218358Z [07:42:38 INF] [367f098e-6f05-4ddf-98e4-6f3bbe1346ae] SomeSubtask processing ended with Success
2024-04-09T07:42:38.745957891Z [07:42:38 INF] SignalRConsumer<SomeSubtask>: 367f098e-6f05-4ddf-98e4-6f3bbe1346ae
2024-04-09T07:43:51.22132  No logs since last 60 seconds
2024-04-09T07:44:51.55404  No logs since last 60 seconds
2024-04-09T07:45:51.88254  No logs since last 60 seconds
2024-04-09T07:46:52.30961  No logs since last 60 seconds
2024-04-09T07:47:24.407700037Z [07:47:24 INF] Application is shutting down...
2024-04-09T07:47:24.618864866Z [07:47:24 INF] Bus stopped: sb://someservicebus.servicebus.windows.net/

One of my replicas ran for around 3 seconds, but only stopped after 5 minutes.

So naturally we checked whether this could be helped by adjusting the cool down period and polling interval, and found two issues:

This just feels hopeless at this point; we cannot adjust these settings, so we'll have to pay Microsoft for a minimum of 5 minutes of execution time per replica.

At the moment we have a messageCount = 1 scale rule on all of our queues with a maximum replica limit of 10 for testing, but since we cannot kill the replicas that are costing money, we'll have to adjust.

Questions:

  • Is this scatter & gather pattern even viable on Azure Container Apps with short-lived subtasks like ours? If so, how should we plan this differently to achieve what we want? We want to horizontally scale for our subtasks; that's more or less it. (One possible alternative is sketched after this list.)
  • If we can't stick to horizontal scaling, should we go for a high-CPU-count revision and achieve parallelism inside the app?
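On the first question: since the subtasks are short-lived and queue-driven, Container Apps event-driven jobs might fit better than scaled replicas of a long-running app; a job execution starts per message and is billed until your process exits, rather than until a KEDA cool down expires. A hedged CLI sketch (names, queue, and secret are placeholders; flags per the event-driven jobs docs):

    # One job execution per queued message; execution ends when the process exits.
    az containerapp job create `
      --name subtask-job --resource-group rg-apps --environment aca-env `
      --image myregistry.azurecr.io/subtask-processor:latest `
      --trigger-type Event --replica-timeout 300 --replica-retry-limit 1 `
      --min-executions 0 --max-executions 10 --polling-interval 10 `
      --scale-rule-name queue --scale-rule-type azure-servicebus `
      --scale-rule-metadata "queueName=subtasks" "messageCount=1" `
      --scale-rule-auth "connection=sb-connection" `
      --secrets "sb-connection=<service-bus-connection-string>"

The catch is that the worker would need reworking to process a message and exit, rather than running MassTransit as a long-lived consumer.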
0 Comments
2024/04/09
09:03 UTC

1

What's going on with the flexible MySQL database?

Hello,

For several days now, I've been experiencing completely unexplained CPU load peaks on my MySQL flexible server (B1MS).

No metrics, no maintenance, and no increase in requests or connections explain it. As proof, I've disconnected my applications, and the server is not publicly accessible. I've followed the troubleshooting documentation, which isn't very helpful.

As a test, I recreated a new instance without doing anything other than creating the server. The result: CPU load peaks of 30% at regular intervals. As a result, the burstable credits gradually drain away.

Is this Microsoft's strategy to force us onto a plan that costs more?

Has anyone ever experienced this problem?

Thanks

6 Comments
2024/04/09
08:09 UTC

1

Azure disk IOPS

Hi everyone,

For about a month we've had this warning on most of our VMs (example):

"The desired performance might not be reached due to the maximum virtual machine disk performance cap. The current virtual machine size supports up to 89 MBps. The total for disks attached to 'XXFILES' is 500 MBps."

You now also get this warning when provisioning a VM in the portal.

Clients have told us that things feel slower than before. Has this limitation always been there, just without the warning, or is it new and hurting us? In my example it's a file server; to reach 500 MBps we'd need to scale it to a D16s_v5, 16 vCPUs and 64 GB RAM, for a file server. Really? And an Ev5-series would be 16 vCPUs and 128 GB RAM.

Seems crazy.
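As far as I know, the per-VM-size uncached throughput cap has always existed alongside the per-disk caps; the portal warning is the new part. The caps are queryable from SKU metadata if you want to shop for the smallest size that meets a target; a sketch (size and region are placeholders):

    # Show uncached disk throughput/IOPS caps for a candidate VM size.
    Get-AzComputeResourceSku |
        Where-Object { $_.Name -eq 'Standard_D16s_v5' -and $_.Locations -contains 'westeurope' } |
        Select-Object -ExpandProperty Capabilities |
        Where-Object { $_.Name -like 'UncachedDisk*' }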

7 Comments
2024/04/09
07:24 UTC

4

Getting up to speed on AKS

What's the opinion here on getting up to speed on AKS? I'm not quite sure if I should understand Docker more to begin with (but a lot of the courses seem in-depth and developer-orientated), and it's the same if I look at native Kubernetes.

I'd like to understand what is expected from a DevOps guy with regard to AKS. Developers seem to have good in-depth knowledge of it, but I think I just need a good understanding of how to take Docker images and deploy them into AKS. Does that sound right?

3 Comments
2024/04/09
07:04 UTC

2

Throttling issues

I am trying to read files from blob storage and process them via Logic Apps. However, I want to restrict the number of files being processed at a time. Degree of parallelism is enabled on a for-each loop in my logic app, but it is not respected when the logic app runs; it processes all the files at once.

Example: degree of parallelism is 8 and I have 10 files in the blob; all 10 files are processed in parallel instead of 8.

What am I missing? How else can I throttle processing too many files from blob storage at once?

Note: my blob connector is the in-app (built-in) connector due to VNET requirements.
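One thing worth ruling out: if each blob starts its own run (a per-file trigger or a split-on setting), for-each concurrency inside a single run can't limit across runs; trigger-level concurrency is the separate knob for that. Both live under runtimeConfiguration in code view; a hedged sketch with assumed trigger/action names:

    "triggers": {
        "When_a_blob_is_added": {
            "runtimeConfiguration": { "concurrency": { "runs": 8 } }
        }
    },
    "actions": {
        "For_each_file": {
            "runtimeConfiguration": { "concurrency": { "repetitions": 8 } }
        }
    }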

0 Comments
2024/04/09
06:36 UTC

0

Troubleshooting a web app + storage account with premium Azure Files

Hi all,

I'm pretty new to web apps and storage account integration. I see a lot of ClientOtherError and ClientThrottlingError entries.

I'm no Kusto expert, so how can I narrow down the cause of this?

Thanks.
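Assuming you've enabled diagnostic settings on the file service and routed them to a Log Analytics workspace, the resource logs land in StorageFileLogs, and a breakdown by operation and status usually points at the culprit. A starting sketch (adjust the time window):

    StorageFileLogs
    | where TimeGenerated > ago(24h)
    | where StatusText !~ "Success"
    | summarize Count = count() by OperationName, StatusCode, StatusText
    | order by Count desc

ClientThrottlingError on premium files often corresponds to hitting the share's provisioned IOPS/throughput, so the share-level Transactions metric split by the ResponseType dimension is worth a look too.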

2 Comments
2024/04/09
06:12 UTC

0

Best Practices for Azure Migration

Hey everyone!

I've been deep-diving into the world of cloud migration lately, particularly with Azure, and I stumbled upon some insights that I found to be incredibly valuable. Migrating to the cloud can be a daunting process, with a myriad of considerations like cost management, security, and minimizing downtime.

I recently came across a blog that does a fantastic job of breaking down the best practices for Azure Cloud Migration.

I thought this could be a great starting point for a discussion here. What have your experiences been with cloud migration? Have you encountered any of these recommended practices in your own journey? Are there any challenges you've faced that weren't covered, or any tips you'd want to add from your personal experience?

Here is the blog link which I referred to: https://www.damcogroup.com/blogs/best-practices-for-frictionless-azure-cloud-migration

Would love to hear your thoughts and any additional insights you might have!

2 Comments
2024/04/09
06:03 UTC

0

Are the Application Settings for a Function App available in Resource Graph Explorer?

We're trying to determine which of our Azure Storage Accounts are still in use. I've written a PS script that pulls down the app settings and searches them for the name of the storage account, but it would be a lot easier for the rest of my team if we could find this information (and more like it) in Resource Graph Explorer. But I just can't find it.

Specifically, I'm looking for the AzureWebJobsStorage name.

TIA
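As far as I can tell they aren't in Resource Graph: app settings only come back from a POST/list call, and ARG indexes control-plane GET data, so a script remains the way. A compact sketch of the loop, in case it helps the team (property names per current Az.Functions; adjust if your module version differs):

    # Find function apps whose AzureWebJobsStorage references a given account (placeholder).
    $target = 'mystorageaccount'
    Get-AzFunctionApp | ForEach-Object {
        $settings = Get-AzFunctionAppSetting -Name $_.Name -ResourceGroupName $_.ResourceGroupName
        if ($settings['AzureWebJobsStorage'] -match $target) { $_.Name }
    }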

0 Comments
2024/04/09
06:00 UTC

1

Web Apps - certain settings show "The request is blocked"

I'm planning to make a few updates to some of my Web Apps tonight and later this week. However, when I went to double-check the configuration settings, I got this response:

"The request is blocked." (You might struggle to see it, as it's black text on a dark background, but it's there.)

Upon further investigation, I found these settings to produce the same effect:

  • Configuration
  • Change App Service Plan
  • Deployment Center
  • Deployment Slots
  • Log stream

I'm also seeing this effect in other Web Apps. While the first two apps I checked have only private access, a third one that has public access also has this issue.

Is anyone else seeing this? Any idea what might be going on?

3 Comments
2024/04/09
02:12 UTC
