/r/SQLServer

Microsoft SQL Server Administration and T-SQL Programming including sql tutorials, training, MS SQL Server Certification, SQL Server Database Resources.

You might also be interested in:

/r/database

/r/sql

/r/Azure

/r/Microsoft

/r/SQLServer

53,618 Subscribers

2

How to monitor progress of initial snapshot application in SQL Server Transactional Replication?

I've set up transactional replication in SQL Server, and I'm at the stage where the Distribution Agent is applying the initial snapshot to the subscriber. This process is taking quite some time due to the large data volume.

Is there a way to monitor the progress of the distribution of the initial snapshot? I know that in Replication Monitor I can see distribution messages like "Bulk copying data into table ... (xx rows)", but each table has a different row count, so it's still hard to estimate the time remaining.

Any insights would be greatly appreciated.
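One starting point (a sketch, assuming the default distribution database name of `distribution`; adjust if yours differs) is to poll the Distribution Agent's history directly instead of watching Replication Monitor — the running totals give a rough progress signal even when per-table row counts vary:

```sql
-- Sketch: latest Distribution Agent status messages and running delivery totals.
USE distribution;
SELECT TOP (20)
    h.[time],
    a.name                   AS agent_name,
    h.comments,              -- e.g. "Bulk copying data into table ... (xx rows)"
    h.delivered_transactions,
    h.delivered_commands
FROM dbo.MSdistribution_history AS h
JOIN dbo.MSdistribution_agents  AS a
    ON a.id = h.agent_id
ORDER BY h.[time] DESC;
```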

5 Comments
2024/12/02
13:16 UTC

5

TempDB contention on 2:1:128 (sys.sysobjvalues) PAGELATCH_EX

I've got a strange issue where I'm getting tempdb contention on wait_resource 2:1:128, which DBCC PAGE tells me is sys.sysobjvalues. I get PAGELATCH_EX/SH and CXCONSUMER waits. Every query will have a blocking session ID that doesn't correlate to any session in sp_BlitzWho or sp_WhoIsActive; however, in the transaction log for tempdb I can find FCheckAndCleanupCachedTempTable with the SPID that blocked the other queries.

I am on SQL Server 2019, where Microsoft's advice is not to enable trace flags 1117 and 1118. Microsoft does have a CU for SQL Server 2016 (KB4131193), although I don't go past 1,000 active tempdb tables.

I've investigated tempdb caching and removed all DDL against temp tables (only CREATE TABLE is left), and I've reduced my highest tempdb-consuming queries. I've checked tempdb autogrowth settings and log sizes, and I've looked for autogrowth events. Every tempdb file is sized the same.

We do use tempdb and TVPs a lot, and all files (tempdb/database) are on a SAN via SCSI. We're on Standard Edition, so we can't use Memory-Optimized TempDB Metadata.

I have 12 tempdb files on 24 cores; I increased from 8 when this started happening.

Is there anything else I can look for? Has anyone else encountered this? I'm pretty much out of ideas and am planning to jump to In-Memory OLTP table types.
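For anyone chasing the same symptom, a sketch for catching the waiters live, with the exact page each one waits on (sys.dm_os_waiting_tasks is a standard DMV; the `2:` filter just matches the tempdb database ID from the post):

```sql
-- Sketch: sessions currently waiting on tempdb PAGELATCH, and who blocks them.
SELECT
    wt.session_id,
    wt.wait_type,
    wt.wait_duration_ms,
    wt.resource_description,      -- db_id:file_id:page_id, e.g. 2:1:128
    wt.blocking_session_id
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.wait_type LIKE 'PAGELATCH%'
  AND wt.resource_description LIKE '2:%';
```

Sampling this on a schedule during an incident shows whether the contention stays pinned to 2:1:128 (system-object metadata) or moves across allocation pages.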

9 Comments
2024/12/02
10:07 UTC

6

In SQL Server Always On Availability Groups, how do I know what really caused one node to fail and then automatically fail over to the other node?

This happens randomly, even on idle instances. There don't seem to be any logs that state the root cause of the failover. Is someone running an update on the server? Is it a t-log backup, or a CPU or memory spike? Nothing in the logs or the Event Viewer.

So, how do I know?
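Two places that often hold the answer beyond the error log are the built-in AlwaysOn_health extended events session and the cluster log (via PowerShell's `Get-ClusterLog`). A sketch for reading the health session's files, assuming they sit in the instance's default LOG directory (pass a full path otherwise):

```sql
-- Sketch: list recent AlwaysOn_health events (lease timeouts, state changes,
-- failover reasons) recorded by the built-in XEvent session.
SELECT
    CAST(event_data AS XML).value('(event/@name)[1]', 'sysname')        AS event_name,
    CAST(event_data AS XML).value('(event/@timestamp)[1]', 'datetime2') AS event_time
FROM sys.fn_xe_file_target_read_file('AlwaysOn_health*.xel', NULL, NULL, NULL)
ORDER BY event_time DESC;
```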

21 Comments
2024/12/01
20:10 UTC

1

Testing Always On Listener

We have SQL Server Standard Edition in our prod environment, and because of this each AG can only have one database in it. We want to add a new AG + listener and add a new DB to it. I have had a new DNS entry and IP set up, but I want to know how we can check that the firewall will allow access from the application server to the new listener IP/DNS. Are there tests we can do?

Will we only be able to test once the AG is actually set up with the listener, perhaps with no DB in it yet?

Sorry, maybe a silly question.

14 Comments
2024/11/30
21:13 UTC

19

Do you people actually use "statisticsparser.com" during performance tuning?

I found out about this client-side JS website from Brent Ozar's streams a couple of years ago. I am just wondering how widely used this is in your experience.

I mentioned this website in a YouTube video about a week ago, and I got a comment tearing me apart for using it: how in databases security is more important than performance, how something like this can get people sacked, and basically calling me a fricking dumba** who will amount to nothing (paraphrasing). To be fair, I've never personally used it in my professional work environments because I'm always extra cautious.

Now I am wondering if anyone uses it as part of their job.

(BTW, I didn't claim to be a SQL Server expert in the short video I made. I'm a data engineer just making videos as a way to document stuff I've learned about perf throughout the years for myself; my videos are not meant to be a replacement for actual experts/professional training/documentation.)

18 Comments
2024/11/30
07:19 UTC

7

Best free courses for beginners to become a production-ready SQL Server DBA

Hi people, I am looking for some options to build the confidence to tackle production servers, whether it's monitoring, troubleshooting, or T-SQL programming. I'd like to invest real time and am looking for a course that gives real hands-on experience.

Any suggestions are really appreciated.

10 Comments
2024/11/29
19:25 UTC

7

Have you migrated Fact Tables and dimension tables from SQL Server to AWS Databricks? If yes, how did you handle it?

We are migrating our BI DWH (dimension tables and fact tables) from SQL Server to Databricks.

The existing data load setup in SSIS is this:

Source: SQL Server raw table

Stage: get only the latest updated data based on a timestamp column

Final layer: use a MERGE statement to update the final table (all MERGE statements are stored procedures)

Now we want to keep the same secondary keys that were generated on SQL Server in our Unity Catalog as well.

My approach is to keep one notebook per table, so I can maintain the logic for each and every table separately.

Let me know if there is a better way to do it.

Or, if you have done a migration from SQL Server to Databricks, please let me know how you did it.

14 Comments
2024/11/29
14:42 UTC

2

SQL 2019 Availability Group backup preferences

We have three AGs in a 3-node cluster, and each node has two instances. I am trying to reconcile the backup preference settings and sys.fn_hadr_backup_is_preferred_replica(db) with where the backups actually occur. Each AG is set to Prefer Secondary; full backups occur on the primary, while log backups run on node 3.

AG01 - 50 P / 50 S / 100 S (fn=1 for node 3)

AG02 - 50 S / 50 P / 100 S (fn=1 for node 3)

AG03 - 50 S / 50 P / 50 S (fn=1 for node 1)

Why are the full backups for all 3 AGs running on the primary? Why are the log backups for AG03 running on node 3 rather than node 1?
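For reference, a sketch of the check most backup jobs wrap around each backup. sys.fn_hadr_backup_is_preferred_replica is only advisory: backups run wherever a job runs unless the job itself honors this function, and native (non-copy-only) full backups can only run on the primary anyway, which is one common reason fulls land on the primary regardless of the preference setting:

```sql
-- Sketch: run on each replica to see which AG databases this instance
-- considers itself the preferred backup replica for.
SELECT
    d.name,
    sys.fn_hadr_backup_is_preferred_replica(d.name) AS is_preferred_backup_replica
FROM sys.databases AS d
WHERE d.replica_id IS NOT NULL;   -- AG databases only
```

Comparing this output node by node against what the backup jobs actually do (and whether the jobs call the function at all) usually explains mismatches like AG03's.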

2 Comments
2024/11/28
08:41 UTC

3

SQL Server documentation

Hey folks, I'm curious: what would you like to see done differently in SQL Server documentation?

What do you see in other product content that you’d like to see in SQL Server content?

6 Comments
2024/11/28
01:55 UTC

26

Sharing my personal project

A few years back I started working on PSBlitz, a PowerShell script that automates the collection of SQL Server diagnostic data and outputs it in a portable and user-friendly format (HTML and Excel). It also saves execution plans and deadlock graphs as .sqlplan and .xdl files.

PSBlitz leverages modified, non-stored procedure, versions of Brent Ozar's SQL Server First Responder Kit, along with some custom diagnostics queries.

Since then I've been working on it in my spare time to add more features and tweak various things.

Any feedback, suggestions, and valid PRs are welcome.

https://github.com/VladDBA/PSBlitz

8 Comments
2024/11/28
01:41 UTC

2

MONEY Column Has Incorrect Value when Read

I've got a real head-scratcher here. I'm on SQL Server 2019, and we've only observed the problem in one of our environments; we have not been able to reproduce it anywhere else. It does not happen every time, but it's not rare either: it probably happens about 50% of the time in the one environment where it does occur.

In one of our tables, we have two MONEY Columns. (Yes, I know MONEY is considered to be bad.)

...
[amount] MONEY NOT NULL DEFAULT ((0)),
[originalAmount] MONEY NOT NULL DEFAULT ((0)),
...

Initially a row gets inserted with both of these values set to zero. Later during our process, they are both updated together with a single parameterized statement.

UPDATE [table] set amount = @amount, originalAmount = @amount WHERE  ...

That update is being called from some C# code, and we have verified that the C# code is using the correct value for the @amount parameter. Just to make sure we weren't going insane, we added a trigger on that table that records the INSERTED and DELETED values into a text message in another table. At the time the trigger runs, the values being written to the table are correct.

After this, when we read the values back, we get some unexplainable results. Say, for example, we set the amount to 5988.82 using the above UPDATE statement. When we read the values back we get:

amount  | originalAmount
5988.82 | 115292150466673.5176

As you may know, the MONEY data type is 8 bytes, encoded as an integer with an assumed 4 decimal places. Consider the following:

DECLARE
  @amount MONEY = 5988.8200,
  @originalAmount MONEY = 115292150466673.5176
SELECT
  @amount as [MoneyType]
  ,CAST(@amount AS BINARY(8)) as [MoneyBinary8]
UNION
SELECT
  @originalAmount as [MoneyType]
  ,CAST(@originalAmount AS BINARY(8)) as [MoneyBinary8]

You'll get the result:

MoneyType            MoneyBinary8
5988.82              0x000000000391D248
115292150466673.5176 0x100000000391D248

So what I have worked out is that when the problem occurs, the value stored in originalAmount has some extra bits set in the high byte of the MONEY column. The extra bits that get set are not always the same, but they are always in the high 8 bits.

Since amount and originalAmount both get set at the same time, the amount field is always correct, and the debug data recorded from the update trigger tells me the correct value is being sent to SQL Server, what could explain one field being updated and read back correctly while the other, updated to the same value, is read back incorrectly?

This might be a red herring, but a piece of the puzzle might be the underlying table structure. In the environment where the problem occurs, the table has been upgraded, and the originalAmount column was added later. I am guessing that this affects the order in which the data is stored on the data pages.
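Given that the observed garbage is confined to the high bytes, a hypothetical detection query (using the [table]/amount/originalAmount names from the post, and assuming the two columns should be equal after the update) is to compare only the low 4 bytes of the two MONEY values — on corrupted rows those still match while the full values differ:

```sql
-- Sketch: find rows where originalAmount diverged from amount only in the
-- high bytes of the 8-byte MONEY encoding (i.e. the bit-flip pattern above).
SELECT *
FROM [table]
WHERE amount <> originalAmount
  AND SUBSTRING(CAST(amount         AS BINARY(8)), 5, 4)
    = SUBSTRING(CAST(originalAmount AS BINARY(8)), 5, 4);
```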

23 Comments
2024/11/27
19:50 UTC

6

Query incredibly slow even with limited fields.

OK, I was tasked earlier today with optimizing another script. The guy told me that part of his script has been running for over a day (yes, you read that right).

So he provided me a copy (named differently) that I can play around with.

The initial SELECT statement, a simple SELECT * FROM..., takes over 30 seconds to run and returns over 500,000 records. I can't really figure out why. It does this even when I reduce the number of columns in the query.

I've even tried selecting the initial data into a temporary table (which is relatively fast) and then running the SELECT operation on my #temp table, and it still takes over 30 seconds to run.

The only thing I can think of is to apply indexes to the temp table and use that.

Are there any other sort of optimization things I can do? I suspect this query is part of what's causing his overall script to run as slowly as it is.

Any thoughts?

UPDATE:

It seems I've narrowed it down to a couple of UPDATE statements, oddly.

The problem is, when I run them as part of the "larger" batch, they each take between 20 and 30 seconds to run. When I run them individually, however, they run much quicker. Not sure what causes that. Gonna have to track that down on Monday.
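One way to chase the batch-vs-standalone difference is to wrap the suspect statements in runtime statistics and compare the two runs; a plan that changes between them often points at parameter sniffing or stale statistics on any temp tables the batch populates first. A sketch (the temp table name is hypothetical):

```sql
-- Sketch: measure each suspect UPDATE with I/O and timing statistics.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- run suspect UPDATE statement 1 here ...
-- run suspect UPDATE statement 2 here ...

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;

-- If the batch fills #temp tables before the UPDATEs, refreshing stats
-- can change the plans the UPDATEs get:
-- UPDATE STATISTICS #SomeTempTable;   -- hypothetical name
```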

38 Comments
2024/11/27
19:04 UTC

5

Using # temp tables is much slower than normal tables?

UPDATE: in a perfect example of premature optimization, the time delays only became visible using statistics in the client code, not SQL Server. The error turned out to be in a private library using internal error reporting. When using a temp table, the code failed to look in the right place for the table DDL (which, honestly, shouldn't be a problem in the first place, but...) and was writing 14,000 errors to a List(Of..., which was completely invisible.

After correcting this problem, the inserts now take 1.8 seconds using temp tables, faster than "main" tables as I suspected, and most of that is on the client side gathering data into the DataTable.

Original post follows:

I have code that:

  1. drops a table of the form [Tablename_Temp], if found
  2. builds it with SELECT * INTO [Tablename_Temp] FROM [Tablename] WHERE 1=0
  3. builds a DataTable in memory
  4. bulk inserts the DataTable into the temp table
  5. uses MERGE to move the data from the temp table to production

I initially ran this on "normal" tables with names like Accounts_Temp so I could easily verify the results in SSMS. On average this took about 4 seconds to upload 12,000 rows across 20 tables.

Once I was happy with the results, I added #s, so Accounts_Temp became #Accounts_Temp. This took an average of 13 seconds for the same data.

This was very surprising. I would have expected using # to be faster, as I understand it turns off any number of bits of internal code that would run on a normal table.

I thought perhaps the non-# versions were getting indexes, although I doubted it, and indeed they are not.

What might explain this?
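The steps above can be sketched in T-SQL as follows (Accounts, AccountID, and Balance are hypothetical names). One design point worth calling out: a #temp table is visible only to the session that created it, so the bulk insert and the MERGE must run on the same open connection — a client library silently using a different session is exactly the kind of failure described in the update above:

```sql
-- Steps 1-2: recreate the staging temp table with the target's structure.
DROP TABLE IF EXISTS #Accounts_Temp;

SELECT *
INTO #Accounts_Temp
FROM dbo.Accounts
WHERE 1 = 0;   -- copy structure only, no rows

-- Step 4 happens on the client: bulk insert the DataTable into #Accounts_Temp.

-- Step 5: merge staged rows into production.
MERGE dbo.Accounts AS t
USING #Accounts_Temp AS s
    ON t.AccountID = s.AccountID            -- assumed key column
WHEN MATCHED THEN
    UPDATE SET t.Balance = s.Balance        -- assumed data column
WHEN NOT MATCHED THEN
    INSERT (AccountID, Balance)
    VALUES (s.AccountID, s.Balance);
```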

27 Comments
2024/11/27
18:11 UTC

8

Can somebody help tell me what our DBAs are doing wrong and why they need SSMS 2014?

For starters, I'm a Systems Engineer/Admin, but I do dabble in scripting/DevOps stuff, including SQL, from time to time. Anyway, here's the current situation.

We are migrating our DBAs to laptops, and they insist that they need SQL Server Management Studio 2014 installed with the Team Foundation plug-in. The two big points they make for needing this 10-year-old tool are source control and debugging. Our source control is currently Team Foundation Server (TFVC).

I just met with one of the head DBAs yesterday for an hour, and he showed me how they work and how they use each tool they have. This is the breakdown.

  • SSMS 2014 - Connects to TFVC and opens SQL Server Management Studio solution files and/or SQL Server project files. This lets them open a source-controlled version of those files, which show up in Solution Explorer with the connections and queries.

  • SSMS 18/19 - Source control was removed by Microsoft, so they can do the same thing as in SSMS 2014 EXCEPT it's not source controlled.

  • Visual Studio 2019 - Can connect to source control, but in the DBAs' words, modifying the different SQL files within the project/solution isn't good enough.

Example 1 of a SQL Project and files

Example 2 of a SQL Project and files

So again, I'm not an expert when it comes to SQL or Visual Studio, but this seems like our DBAs being lazy and not researching the new way of doing things. Microsoft got rid of source control in SSMS 18/19, but I feel like it can be done in VS 2019 or Azure Data Studio. Something I was thinking: why can't they just use VS 2019 for source control > check out a project > make changes locally in SSMS 18 > save locally > push changes back in VS 2019? This is pretty much what I do with Git and my source-controlled scripts.

Anyone have any advice or been in the same situation?

72 Comments
2024/11/27
16:27 UTC

1

Running SQL Server Standard w/ Software Assurance in Azure Virtual Desktop

I am hoping someone can help verify what I am trying to do here, but I swear Microsoft tries its best to make it difficult to find information around licensing usage entitlements.

I am working with a nonprofit that uses Sage 300 and wants to entertain moving it to AVD. I know that you must use a subscription-based model to run SQL Server in Azure, or look at one of the SQL-as-a-service products.

Long story short: as a nonprofit, they can buy SQL/CAL licensing with Software Assurance for dirt cheap, and the vendor won't support the SQL-as-a-service option. My issue is that the License Mobility and Azure Hybrid Benefit documentation is not crystal clear on whether I can use this version in Azure. It sounds like I can with a mobility partner, but can I in Azure itself?

0 Comments
2024/11/27
14:20 UTC

0

Can anyone help me? I keep getting the error "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections"

I'm on SQL Server 2019 with SSMS. Error 40. I'm a total newbie. This error is something I've been struggling with since yesterday, when I installed the server. I can't connect to the server. The only thing I could think of was the firewall warning I saw in the installation process, but I could be wrong. Can anyone help?

22 Comments
2024/11/27
14:00 UTC

3

Performance test tools with real data

Hello,

SMB single IT guy here ;).

We run a few databases on MS SQL Server 2016.

We are discussing moving our servers from VMware to a different system, and also to different hardware.

I want to test potential SQL Server performance beforehand.

I am looking for a way to take a database backup and then record the real transactions over a day. I would then like to replay those real transactions to test performance on the new server.

Is there a tool for that?

Thank you

Daniel

5 Comments
2024/11/27
07:34 UTC

6

Losing connection when installing MS updates

Asking if others have seen this behaviour. This is the scenario: a 2-replica, 2-node Always On SQL Server cluster in an active/passive configuration.

We begin by installing the monthly Microsoft OS patches on the secondary replica. So far so good. Then the actual SQL Server updates kick off. At that very moment, the application loses connectivity to the database.

It doesn't make sense to me, since the primary replica remains intact. But it can't be reached.

Cluster events show the error in the image.

After the update is finished, the secondary node is rebooted, and when it comes back, connectivity to the primary is re-established.

We outsourced DB support to an external company, and they believe the issue is the network. I'm not a DBA, just a tech, but I disagree with them, as it only occurs when updating SQL Server.

This has been happening since we went live a few months ago.

Any ideas on what could be causing this?

16 Comments
2024/11/27
01:59 UTC

1

Data Repository for reporting

Hi

Just wondering what people's thoughts are on using Azure to host a SQL database that will take snapshots of data from our on-premises server. We have end users from all over the globe using Power BI for reporting.

With Azure, we would point everyone and everything here for reporting: Power BI, Excel, R Studio. The aim would be to move all our resource-hogging data processing to the cloud and leave on-prem for production-related tasks only.

Or should we aim to do this on-prem, and is there a benefit?

4 Comments
2024/11/26
16:10 UTC

1

Azure SQL MI link not staying in sync

I ran into an odd scenario in development yesterday. We have been testing the SQL MI link feature for some time, and it has worked well. It's a decently large database, so it takes a couple of hours to set up the MI link and seed the data. The MI link is set up in async mode. The database has 4 data files and is approximately 400 GB. The MI itself is set up as General Purpose, 4 cores, premium-series hardware (for the core-to-memory ratio).

Through our app we had users running some disk-intensive processes, and when we checked the MI link we found that it was not keeping up with the primary DB. A user reported that changes were not being updated in the MI database. When looking at sys.dm_hadr_database_replica_states, everything showed synchronized and healthy, but secondary_lag_seconds was high and would not go down, even after a couple of hours. It was as if it had stopped synchronizing data.

I paused and resumed data movement a couple of times, but that did not help. Then I tried resizing the MI to 8 cores just to see if that helped, but it didn't. As a last resort, today I am tearing down the MI link and setting it back up, but having multiple hours of downtime is not going to work in PROD. Has anyone seen this behavior with MI link?
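For anyone diagnosing similar MI link lag, the queue columns next to secondary_lag_seconds usually show whether the send side or the redo side is falling behind; a sketch against the same DMV the post mentions:

```sql
-- Sketch: break the lag down into "not yet sent" vs "received but not yet redone".
SELECT
    DB_NAME(drs.database_id)          AS database_name,
    drs.synchronization_state_desc,
    drs.synchronization_health_desc,
    drs.log_send_queue_size,          -- KB not yet sent to the secondary
    drs.redo_queue_size,              -- KB received but not yet redone
    drs.secondary_lag_seconds
FROM sys.dm_hadr_database_replica_states AS drs;
```

A growing redo_queue_size with a small log_send_queue_size would point at the secondary's redo throughput (often I/O- or core-bound on General Purpose) rather than the network link.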

12 Comments
2024/11/26
14:31 UTC

2

SQL Server 2017 Patching issues

Hi All,

We have a SQL Server 2017 instance (CU31, released 2022/09/20, build 14.0.3456.2, support ends 2027/10/12).

Now I noticed the following is the latest:

Cumulative Update | Release Date | Build       | Support Ends
CU31 GDR          | 2024/11/12   | 14.0.3485.1 | 2027/10/12

Do I need to install the other 5 patches first, or can I just install this one?

2 Comments
2024/11/26
05:21 UTC

1

SQL Server installation for learning

Hello.

I would like to gain experience working with SQL Server tools, SSIS in particular. I am not looking to become a DBA, but I do want to learn DB management features like performance tuning/monitoring, execution plans, system table usage, job scheduling, etc.

I have been working with SQL Server databases as a developer for a few years, writing ETL processes using stored procedures.

I originally was looking at "SQL Server 2017 Integration Services Cookbook". It goes over the instructions for installing the database and the Data Tools. Since those versions of SQL Server and Data Tools are no longer available (and outdated), I started looking at online documentation from Microsoft. The information there is more scattered and confusing.

My idea is to buy a laptop with Windows, install SQL Server 2019 or 2022, and then SQL Server Data Tools. I am currently unsure which edition of SQL Server I should download. I think I can't use SQL Server Data Tools with SQL Server Express edition; I need to get Developer Edition, is that correct? Is there an advantage to picking 2022 over 2019? I am leaning towards 2019, as I hope there is more material/books on it since it has been around longer.

Any advice is greatly appreciated. If you know of a book (or two or three) I could follow to accomplish these tasks, please let me know.

Thank you.

3 Comments
2024/11/26
00:19 UTC

3

Data Synchronization from SQL Server Database -> SalesForce

Good afternoon. I have a SQL Server database, and I need to synchronize its data to Salesforce in real time. Does anyone know the best approach to synchronize this data? Thank you.
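One common building block for near-real-time sync (not the only approach, and the integration layer that ships changes to Salesforce still has to be chosen separately) is Change Data Capture on the source tables; YourDatabase and dbo.Customers below are hypothetical names:

```sql
-- Sketch: enable CDC so a sync process can poll change tables for deltas.
USE YourDatabase;        -- hypothetical database
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Customers',   -- hypothetical table to sync
    @role_name     = NULL;           -- no gating role for readers
```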

15 Comments
2024/11/25
17:59 UTC

11

SQL Server 2025 Private Preview

Has anyone successfully applied, and would you like to share the process/benefits/caveats of onboarding the platform as an early adopter?

10 Comments
2024/11/25
17:38 UTC

2

SQL Failover / Replication

We are currently building out our disaster recovery model to have a new failover site for our mission-critical SQL database. What do people here think is the best solution/tool to do this? Our internal IT team has Veeam Backup available for SQL Server and would apply a backup to the failover site. However, I am thinking we should be using SQL Server Always On, as this wouldn't involve any manual intervention if the primary SQL server goes down.

16 Comments
2024/11/25
16:08 UTC

3

SSDT - Unable to reference 'master' or 'msdb' with new sdk style project

I'm trying to follow the process found at this documentation:
https://learn.microsoft.com/en-us/sql/tools/sql-database-projects/howto/convert-original-sql-project?view=sql-server-ver16&pivots=sq1-visual-studio-sdk

And for the most part the upgrade was easy. I only have one major blocker: the project is unable to reference objects found in msdb or master. In the old project format we were able to reference system databases, and that does not appear to be available in the new project format.

I thought the new project style was supposed to support NuGet, but when I try to add the Microsoft.SqlServer.Dacpacs.Master NuGet package as a reference, I get the following error:

{
Attempting to gather dependency information for package ‘Microsoft.SqlServer.Dacpacs.Master.160.2.2’ with respect to project ‘Database’, targeting ‘.NETFramework,Version=v4.7.2’
Gathering dependency information took 82 ms
Attempting to resolve dependencies for package ‘Microsoft.SqlServer.Dacpacs.Master.160.2.2’ with DependencyBehavior ‘Lowest’
Resolving dependency information took 0 ms
Resolving actions to install package ‘Microsoft.SqlServer.Dacpacs.Master.160.2.2’
Resolved actions to install package ‘Microsoft.SqlServer.Dacpacs.Master.160.2.2’
Install failed. Rolling back…
Package ‘Microsoft.SqlServer.Dacpacs.Master.160.2.2’ does not exist in project ‘Database’
Package ‘Microsoft.SqlServer.Dacpacs.Master.160.2.2’ does not exist in folder ‘C:<path>\Database1\packages’
Executing nuget actions took 76 ms
Package ‘Microsoft.SqlServer.Dacpacs.Master 160.2.2’ has a package type ‘DACPAC’ that is not supported by project ‘Database’.
}

Example errors:

Procedure: [dbo].[sp_XXXX] contains an unresolved reference to an object. Either the object does not exist or the reference is ambiguous because it could refer to any of the following objects: [sys].[indexes].[I]::[name], [sys].[indexes].[name], [sys].[objects].[I]::[name] or [sys].[schemas].[I]::[name].

Procedure: [dbo].[sp_XXXX] has an unresolved reference to object [dbo].[sp_executesql].[@replacementValueOUT].

Anyone have any suggestions?

7 Comments
2024/11/25
15:30 UTC

0

SSMS Vent\Rant

Is Microsoft ever going to release a version of SSMS that doesn't freeze and/or crash and restart?! I get my hopes up with every new release, yet the problem continues. It's quite ridiculous. We should be able to leave a few windows open with connections.

31 Comments
2024/11/25
14:39 UTC

2

Log Reuse Wait Behavior Question - Backup on AG1 Causing AVAILABILITY_GROUP Wait on AG2 in Distributed AG Setup

I have a question about log_reuse_wait behavior in our Distributed AG setup:

Setup:

  • 2 FCIs, each hosting a single-replica AG (AG1 and AG2)
  • These AGs are connected via Distributed AG
  • AG1 contains very large databases

Issue observed:

  • When running backup on AG1's large database:
    • The database in AG1 shows log_reuse_wait_desc = ACTIVE_BACKUP_OR_RESTORE
    • The corresponding database on AG2 shows log_reuse_wait_desc = AVAILABILITY_GROUP
  • Once backup on AG1 completes
    • AG2's AVAILABILITY_GROUP wait immediately clears
    • Distributed AG replication continues normally

Question:

Why does AG2 show AVAILABILITY_GROUP wait during AG1's backup operation?
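For anyone comparing notes, a sketch for watching both sides while the backup runs: log_reuse_wait_desc shows what is holding truncation on each database, and the replica-state DMV shows how far the (distributed) AG will currently let the log truncate:

```sql
-- Sketch: what is blocking log truncation right now?
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDatabase';   -- hypothetical database name

-- Sketch: AG-side truncation point vs hardened LSN per database.
SELECT
    DB_NAME(drs.database_id) AS database_name,
    drs.truncation_lsn,
    drs.last_hardened_lsn
FROM sys.dm_hadr_database_replica_states AS drs;
```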

2 Comments
2024/11/25
12:28 UTC

3

Checking backups across a cluster

If I have a database outside an AG, I can query the msdb tables to verify backup info. But is there a way to confirm when the last backup occurred for a database within a cluster? Say I have three nodes and I don't know if the backups are running or where they are running, or perhaps they do not always run on the same replica. If I have a job that runs each morning on each replica to check whether the backups ran, how would I verify that the AG databases were backed up last night?
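One caveat drives the whole problem: msdb backup history is per-instance, so each replica only knows about backups it ran itself. A sketch is to run the same query on every replica (for example via a Central Management Server registered-servers query) and take the most recent finish date across nodes:

```sql
-- Sketch: last full/log backup per AG database as recorded in THIS replica's msdb.
-- Run on every replica and take the max per database across all results.
SELECT
    d.name,
    MAX(CASE WHEN b.[type] = 'D' THEN b.backup_finish_date END) AS last_full,
    MAX(CASE WHEN b.[type] = 'L' THEN b.backup_finish_date END) AS last_log
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
    ON b.database_name = d.name
WHERE d.replica_id IS NOT NULL   -- AG databases only
GROUP BY d.name;
```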

7 Comments
2024/11/25
08:26 UTC
