/r/SQLServer
Microsoft SQL Server Administration and T-SQL Programming including sql tutorials, training, MS SQL Server Certification, SQL Server Database Resources.
I am facing an issue with inserting data into a temp table inside a stored procedure in SQL Server, specifically when executing nested stored procedures.
Scenario:
I have an outer stored procedure pocsaveseat, where I call another stored procedure pocseatvalidations and try to insert its result set into a temp table.
pocseatvalidations in turn calls another stored procedure, pocaccessbyfetch, which also returns a result set that is inserted into a temp table inside pocseatvalidations.
The problem arises when I insert data into the temp table in pocsaveseat by executing pocseatvalidations. The result data from pocseatvalidations is not getting inserted into the temp table.
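For reference, here is a minimal sketch of the structure described; the procedure names come from the post, while the table and column names are hypothetical. One thing worth checking: SQL Server does not allow INSERT ... EXEC to be nested, so an outer INSERT ... EXEC into a procedure that itself performs INSERT ... EXEC fails at runtime with error 8164 ("An INSERT EXEC statement cannot be nested"), which lines up with the symptoms below.
CREATE PROCEDURE pocaccessbyfetch AS
BEGIN
    SELECT 1 AS SeatId; -- returns a result set
END;
GO
CREATE PROCEDURE pocseatvalidations AS
BEGIN
    CREATE TABLE #ValidationResults (SeatId INT); -- hypothetical name
    INSERT INTO #ValidationResults EXEC pocaccessbyfetch; -- inner INSERT ... EXEC
    SELECT SeatId FROM #ValidationResults;
END;
GO
CREATE PROCEDURE pocsaveseat AS
BEGIN
    CREATE TABLE #SeatResults (SeatId INT); -- hypothetical name
    INSERT INTO #SeatResults EXEC pocseatvalidations; -- outer INSERT ... EXEC; nested, raises error 8164
    SELECT SeatId FROM #SeatResults;
END;
GO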
What works (any one of the following):
If I comment out the INSERT INTO statement in pocsaveseat and just call pocseatvalidations, the result is returned as expected.
If I run pocseatvalidations independently, it returns the correct results.
If I comment out the execution of pocaccessbyfetch inside pocseatvalidations, it works.
My Question:
What could be causing the data from pocseatvalidations not to be inserted into the temp table in pocsaveseat? Are there any nuances with temp tables, session handling, or nested stored procedures that I might be overlooking? Any suggestions for debugging this further, or alternative approaches to achieve the desired result? Note: the temp tables have unique names across these stored procedures.
Thanks in advance
I had a look at this post: Issue with patching for SQL server : r/SQLServer but it didn't entirely address my issue.
We're living on the edge and using WSUS to tell us when to patch our DBs (we don't have a dedicated DBA to do this and keep track of it). We're trying to apply the KB5046856 patch, but it always fails. The Summary.txt file was weird, too (the exception help link was, no surprise, not helpful). We've tried restarting and then applying the patch; nope. I'm thinking we next reboot and apply the downloaded patch manually.
Any other ideas that don't require a herd of goats to appease the SQL deities?
All DBs are on a supported version/level to be patched (13.0.7029.3).
Overall summary:
Final result: The patch installer has failed to update the following instance:. To determine the reason for failure, review the log files.
Exit code (Decimal): -2146233080
Exit facility code: 19
Exit error code: 5384
Exit message: Index was outside the bounds of the array.
Start time: 2024-12-01 05:00:27
End time: 2024-12-01 05:03:48
Requested action: Patch
Exception help link: https://go.microsoft.com/fwlink?LinkId=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=13.0.7050.2&EvtType=0x0E17F4C7%400x724C4CE8&EvtType=0x0E17F4C7%400x724C4CE8
Exception summary:
The following is an exception stack listing the exceptions in outermost to innermost order
Inner exceptions are being indented
Exception type: System.IndexOutOfRangeException
Message:
Index was outside the bounds of the array.
HResult : 0x80131508
Data:
DisableWatson = true
Stack:
at Microsoft.SqlServer.Configuration.MsiExtension.FileLockedStatusCheckAction.CalculateLockingProcessesForPatchableFiles()
at Microsoft.SqlServer.Configuration.MsiExtension.FileLockedStatusCheckAction.ExecuteAction(String actionId)
at Microsoft.SqlServer.Chainer.Infrastructure.Action.Execute(String actionId, TextWriter errorStream)
at Microsoft.SqlServer.Setup.Chainer.Workflow.ActionInvocation.<>c__DisplayClasse.<ExecuteActionWithRetryHelper>b__b()
at Microsoft.SqlServer.Setup.Chainer.Workflow.ActionInvocation.ExecuteActionHelper(ActionWorker workerDelegate)
I'm hoping someone here can shed some light on this issue. We started experiencing a very strange issue with SSRS 2022. Yesterday at around 10:05 AM, the SQL Server restarted without a clear cause. Ever since that restart, all of the reports have been taking about 1-2 minutes to run rather than a few seconds. The ReportServer database is stored on a different SQL Server that didn't have any restarts, etc.
I started testing the connections on the data sources and noticed that they take around 20 seconds to connect. I then tried a few different AD accounts, and some accounts connected instantly. I also tried a database user instead of Windows authentication, and that was also instant. I've checked all the logs, the Kerberos config, SPNs, etc.; everything is configured properly, and there's no clear indication of where the issue comes from. Has anyone experienced anything similar? I changed all the data sources to use a different account for now, but ideally I'd like to go back to using the service account.
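For reference, a quick way to check whether a given connection actually negotiated Kerberos or silently fell back to NTLM is to query sys.dm_exec_connections from the affected session:
-- Shows the authentication scheme (KERBEROS or NTLM) for the current session
SELECT session_id, auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;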
I have another strange issue that might be in some way related. I have 2 users who can't access the new SSRS report server using Report Builder; it says the server doesn't exist. I found out that if these users are part of certain AD groups, it won't let them connect. These AD groups have absolutely nothing to do with SSRS; they just grant permissions to certain file shares. Both very strange issues.
I'm not sure this is allowed here or not, and if not I'm sure a mod will delete it. Let's say I have a SQL Server application which is useful to shops running SQL Server which I would like to start selling independently. Where and how should I promote such an application? This is something I developed as an independent contractor and have installed for several customers over a period of several years, so it's had a lot of running experience in production environments, but it was always just part of my normal services. I would now like to offer it independent of my normal services. I don't really want to get into what it does because I don't want this post to be promotion. Any advice is welcome. The program is feature complete, but I typically have manually installed it when needed. I'm now working on an installer package to install it and should have that ready in a few weeks.
Hello everyone. I am a junior software developer working with .NET; in my organisation SQL Server is used as the database, and during development most of the work is done with SQL queries, stored procedures, and transaction statements. I've become very interested in working with these SQL statements, and I want to know what the future scope is if I put more time into learning SQL Server: what good opportunities could I get, or is it limited?
Also, please suggest a free certification course on SQL Server to gain expertise.
I have two servers on SQL Server 2019 that randomly decided to stop the SQL Server Browser service. When attempting to restart it, it turns itself back off immediately. Using the command prompt below to start in console mode on the database host server for more feedback, I see that it is starting up successfully, but gets to the point "Found no installed SQL engine instances -- not listening on SSRP." which is where it shuts itself down again.
C:\Windows\system32>"C:\Program Files (x86)\Microsoft SQL Server\90\Shared\sqlbrowser.exe" -c
SQLBrowser: starting up in console mode
SQLBrowser: starting up SSRP redirection service
SQLBrowser is successfully listening on ::[1434]
SQLBrowser is successfully listening on 0.0.0.0[1434]
SQLBrowser: Found no installed SQL engine instances -- not listening on SSRP.
SQLBrowser: Both SSRP and OLAP redirection services are disabled. Shutting down browser service
The only info I can find on this references the missing or disabled registry key for SSRP, which I have already verified is correctly in place.
HKLM\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server\90\SQL Browser\SSRPListener
The key is set to a value of 1. To validate, I set it to 0, at which point SQL Browser doesn't start up at all. Setting it back to 1 returns the original behavior of starting successfully and then shutting itself off immediately.
Meanwhile the databases themselves are accessible, so I know there ARE in fact installed SQL engine instances that should be visible/listening, and I'm completely baffled on how to move forward here. Any slim shred of an idea would be appreciated.
I was trying to recover space by getting rid of some unique indexes, which are based on GUIDs. I later found out that the code needs those indexes to protect the data: many times users will add duplicate data in the program. I would like to get the program fixed rather than just giving the user an error. Is there a way, maybe a DMV or something, to track these errors? Sometimes I can see them in the Event Viewer, but I am thinking permissions will make it difficult to run code to export those results. Basically, when we get an error I want our team notified so we can talk to the user and find out what they were specifically doing at the time. Many of these issues were not reproducible in the development environment.
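One option, sketched here on the assumption that you can create server-scoped Extended Events sessions, is to capture duplicate-key errors (2601 for unique indexes, 2627 for unique constraints) as they happen, along with who raised them:
-- Capture duplicate-key errors with the offending login, host, and statement
CREATE EVENT SESSION [DuplicateKeyErrors] ON SERVER
ADD EVENT sqlserver.error_reported (
    ACTION (sqlserver.client_hostname, sqlserver.username, sqlserver.sql_text)
    WHERE ([error_number] = 2601 OR [error_number] = 2627)
)
ADD TARGET package0.event_file (SET filename = N'DuplicateKeyErrors');
GO
ALTER EVENT SESSION [DuplicateKeyErrors] ON SERVER STATE = START;
A job could then read the .xel files periodically and notify the team.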
I have a package I haven't touched for a while. When I go to a data flow task (the only one in this package), it's a blank window, as though there were no steps. I know steps exist because they are throwing errors, and I can also see them in the code-behind, yet when the task is actually opened they're seemingly not there. My best guess is that they're off the screen somehow and the window is just looking in the wrong place, but this does present a problem for fixing them. Does anyone know a way I can reset the view? I've tried zooming out to the maximum amount.
Weird, I just tried adding two new things from the toolbar and they don't show up either, so now I'm completely confused.
I've been asked to install SQL Server on a dedicated server that only has 2 physical (1 TB) SSD drives. I'm dealing with a single 36 GB DB. This will sit behind a web-server-based app on a different server, so lots of little reads and writes. I know the overall setup is not ideal, but it is what it is. I'm thinking:
C: OS, SQL Server
D: Tempdb, system and user data and logs
Or would you recommend something different, like moving tempdb to C:?
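For what it's worth, the initial placement isn't permanent: tempdb files can be relocated later with ALTER DATABASE followed by a service restart. A sketch, assuming the default logical file names:
-- Move tempdb data and log; takes effect after the SQL Server service restarts
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = N'C:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = N'C:\TempDB\templog.ldf');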
I wonder if anybody can explain what the parameter REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT is used for. It applies to Always On Availability Group configurations. We are running SQL Server 2022 and have it set to 1. We have 2 replicas in an active/passive configuration. I'm thinking we need to change the parameter to 0 to prevent the instance from "freezing" when the secondary replica is not fully available. Am I wrong?
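In case it's useful, the current value can be checked and changed per AG (the AG name below is a placeholder):
-- Check the current setting for each availability group
SELECT name, required_synchronized_secondaries_to_commit
FROM sys.availability_groups;
-- Change it (run on the primary)
ALTER AVAILABILITY GROUP [YourAG]
SET (REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT = 0);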
I've set up transactional replication in SQL Server, and I'm at the stage where the Distribution Agent is applying the initial snapshot to the subscriber. This process is taking quite some time due to the large data volume.
Is there a way to monitor the progress of the initial snapshot distribution? I know Replication Monitor shows messages like "Bulk copying data into table ... (xx rows)", but different tables have different row counts, so it's still hard to estimate the time remaining.
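One rough approach, assuming the default distribution database name and with all parameter values as placeholders: compare row counts on the subscriber against the publisher while the bulk copy runs, and check pending commands at the distributor once the snapshot is applied.
-- On the subscriber: approximate row counts per table while the snapshot loads
SELECT OBJECT_NAME(p.object_id) AS TableName, SUM(p.rows) AS ApproxRows
FROM sys.partitions AS p
WHERE p.index_id IN (0, 1)
GROUP BY p.object_id
ORDER BY TableName;
-- On the distributor: commands not yet delivered to the subscriber
EXEC distribution.dbo.sp_replmonitorsubscriptionpendingcmds
    @publisher = N'PublisherServer', @publisher_db = N'PubDb',
    @publication = N'MyPublication', @subscriber = N'SubscriberServer',
    @subscriber_db = N'SubDb', @subscription_type = 0;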
Any insights would be greatly appreciated.
I've got a strange issue where I'm getting tempdb contention on wait_resource 2:1:128, which DBCC PAGE tells me is sys.sysobjvalues. I get PAGELATCH_EX/SH and CXCONSUMER waits. Every query will have a blocking session id that doesn't correlate to any session in sp_BlitzWho or sp_WhoIsActive; however, in the transaction log for tempdb I can find FCheckAndCleanupCachedTempTable with the SPID that blocked the other queries.
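For context, this is roughly how the contention shows up live (a sketch; database_id 2 is tempdb):
-- Sessions currently waiting on tempdb page latches
SELECT wt.session_id, wt.wait_type, wt.wait_duration_ms,
       wt.resource_description, wt.blocking_session_id
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.wait_type LIKE N'PAGELATCH%'
  AND wt.resource_description LIKE N'2:%';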
I am on SQL Server 2019, for which Microsoft's advice is not to enable trace flags 1117 and 1118. Microsoft does have a fix for SQL Server 2016 (KB4131193), although I don't go past 1,000 active tempdb tables.
I've investigated tempdb caching and removed all DDL against the temp tables (only CREATE TABLE is left), and I've reduced my highest tempdb-consuming queries. I've checked tempdb autogrowth settings and log sizes, and I've looked for autogrowth events. Every tempdb file is sized the same.
We do use tempdb and TVPs a lot, and all files (tempdb/database) are on a SAN via SCSI. We're on Standard Edition, so we can't use memory-optimized tempdb metadata.
I have 12 tempdb files on 24 cores. I increased from 8 when this started happening.
Is there anything else I can look for? Has anyone else encountered this? I'm pretty much out of ideas and am planning to jump to in-memory OLTP table types.
This happens completely at random, even on idle instances. There don't seem to be any logs that state the root cause of what made the node fail over. Is someone running an update on the server, is it a t-log backup, is it a CPU or memory spike? Nothing in the logs or the Event Viewer.
So, how do I know?
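A couple of starting points, assuming default logging. The Windows cluster log (generated with Get-ClusterLog) is usually where the actual failover reason lands, and the SQL Server error log can be searched around the failover time; sp_readerrorlog's search parameters are undocumented but widely used:
-- Search the current and previous error logs for failover-related entries
EXEC sp_readerrorlog 0, 1, N'failover';
EXEC sp_readerrorlog 1, 1, N'failover';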
We have SQL Server Standard Edition in our prod environment, and because of this each AG can only have one database in it. We want to add a new AG + listener and add a new DB to it. I have had a new DNS entry and IP set up, but I want to know how we can check that the firewall will allow access from the application server to the new listener IP/DNS. Are there tests we can do?
Will we only be able to test once the AG is actually set up with the listener, perhaps with no DB in it yet?
Sorry, maybe a silly question
I found out about this client-side JS website from Brent Ozar's streams a couple of years ago. I am just wondering how widely used this is in your experience.
I mentioned this website in a YouTube video about a week ago and got a comment tearing me apart for using it: in databases, security is more important than performance, something like this can get people sacked, and I'm basically a fricking dumba** who will amount to nothing (paraphrasing). To be fair, I've never personally used it in my professional work environments because I'm always extra cautious.
Now I am wondering if anyone uses it as part of their job.
(BTW, I didn't claim to be a SQL Server expert in the short video I made. I'm a data engineer just making videos to document, for myself, stuff I've learned about performance over the years; my videos are not meant to be a replacement for actual experts/professional training/documentation.)
Hi people, I am looking for some options to build the confidence to tackle production servers, whether it's monitoring, troubleshooting, or T-SQL programming. I would like to invest real time and am looking for a course that offers real hands-on practice.
Any suggestions are really appreciated.
We are migrating our BI DWH (dimension tables and fact tables) from SQL Server to Databricks.
The existing data load set up in SSIS is this:
Source: SQL Server raw table
Stage: get only the latest updated data based on a timestamp column
Final layer: use a MERGE statement to update the final table (all MERGE statements are stored procedures)
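For context, the final layer today is roughly this shape in T-SQL (a simplified sketch; table and column names are hypothetical):
-- Existing final-layer pattern: upsert the latest staged rows into the target
MERGE INTO dbo.DimCustomer AS tgt
USING stg.DimCustomer AS src
    ON tgt.CustomerBK = src.CustomerBK
WHEN MATCHED AND src.UpdatedAt > tgt.UpdatedAt THEN
    UPDATE SET tgt.Name = src.Name, tgt.UpdatedAt = src.UpdatedAt
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerBK, Name, UpdatedAt)
    VALUES (src.CustomerBK, src.Name, src.UpdatedAt);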
Now we want to keep the same secondary keys that were generated on SQL Server in our Unity Catalog as well.
My approach is to keep one notebook per table, so I can maintain the logic for each and every table separately.
Let me know if there is a better way to do it, or if you have done a migration from SQL Server to Databricks, please let me know how you did it.
We have three AGs in a 3-node cluster; each node has two instances. I am trying to reconcile the backup preference settings and sys.fn_hadr_backup_is_preferred_replica(db) with where the backups actually occur. Each AG is set to Prefer Secondary, yet full backups occur on the primary while log backups run on node 3.
AG01 - 50 P / 50 S / 100 S (fn=1 for node 3)
AG02 - 50 S / 50 P / 100 S (fn=1 for node 3)
AG03 - 50 S / 50 P / 50 S (fn=1 for node 1)
Why are the full backups for all 3 AGs running on the primary? Why are the log backups for AG03 running on node 3 rather than node 1?
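Worth noting: SQL Server doesn't enforce the backup preference itself; it only reports it through the function, so every backup job has to check it explicitly. A typical guard (database name and path are placeholders):
-- Run in each scheduled backup job; back up only where this replica is preferred
IF sys.fn_hadr_backup_is_preferred_replica(N'YourDb') = 1
BEGIN
    BACKUP DATABASE [YourDb] TO DISK = N'\\backupshare\YourDb.bak' WITH COMPRESSION;
END;
If the jobs don't include that check, they simply run wherever they're scheduled, which would explain fulls landing on the primary regardless of the preference.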
Hey folks, I'm curious: what would you like to see done differently in SQL Server documentation?
What do you see in other product content that you’d like to see in SQL Server content?
A few years back I started working on PSBlitz, a PowerShell script that automates the collection of SQL Server diagnostics data and outputs it in a portable and user-friendly format (HTML and Excel). It also saves execution plans and deadlock graphs as .sqlplan and .xdl files.
PSBlitz leverages modified, non-stored-procedure versions of Brent Ozar's SQL Server First Responder Kit, along with some custom diagnostics queries.
Since then I've been working on it in my spare time to add more features and tweak various things.
Any feedback, suggestions, and valid PRs are welcome.
I've got a real head-scratcher here. I have SQL Server 2019, and we've only observed the problem in one of our environments; we have not been able to reproduce it anywhere else. It does not happen every time, but it's not rare either: it probably happens about 50% of the time in the one environment where it does occur.
In one of our tables, we have two MONEY columns. (Yes, I know MONEY is considered to be bad.)
...
[amount] MONEY NOT NULL DEFAULT ((0)),
[originalAmount] MONEY NOT NULL DEFAULT ((0)),
...
Initially a row gets inserted with both of these values set to zero. Later in our process, they are both updated together with a single parameterized statement.
UPDATE [table] set amount = @amount, originalAmount = @amount WHERE ...
That update is being called from some C# code, and we have verified that the C# code is using the correct value for the @amount parameter. Just to make sure we weren't going insane, we added a trigger on that table that records the INSERTED and DELETED values into a text message in another table. At the time the trigger runs, the values being written to the table are correct.
After this, when we read the values back, we get some inexplicable results. Say, for example, we set the amount to 5988.82 using the above UPDATE statement. When we read the values back we get:
amount | originalAmount
5988.20 | 115292150466673.5176
As you may know, the MONEY data type is 8 bytes, encoded as an integer with an assumed 4 decimal places. Consider the following:
DECLARE
@amount MONEY = 5988.8200,
@originalAmount MONEY = 115292150466673.5176
SELECT
@amount as [MoneyType]
,CAST(@amount AS BINARY(8)) as [MoneyBinary8]
UNION
SELECT
@originalAmount as [MoneyType]
,CAST(@originalAmount AS BINARY(8)) as [MoneyBinary8]
You'll get the result:
MoneyType MoneyBinary8
5988.82 0x000000000391D248
115292150466673.5176 0x100000000391D248
So what I have worked out is that when the problem occurs, the value stored in originalAmount has some extra bits set in the high byte of the MONEY column. The extra bits that get set are not always the same, but they are always in the high 8 bits.
Since amount and originalAmount both get set at the same time, the amount field is always correct, and the debug data recorded by the update trigger tells me the correct value is being sent to SQL Server, what could explain one field being updated and read back correctly while the other is updated to the same value but read back incorrectly?
This might be a red herring, but a piece of the puzzle might be the underlying table structure. In the environment where the problem occurs, the table has been upgraded, and the originalAmount column was added later. I am guessing that this affects the order in which the data is stored on the data pages.
OK, I was tasked earlier today with optimizing another script. The guy told me that part of his script has been running for over a day (yes, you read that right).
So he provided me a copy (named differently) that I can play around with.
The initial SELECT statement, a simple SELECT * FROM ..., takes over 30 seconds to run and returns over 500,000 records. I can't really figure out why. It does this even when I reduce the number of columns in the query.
I've even tried selecting the initial data into a temporary table (which is relatively fast), and then running the select operation on my #temp table, and it still takes over 30 seconds to run.
The only thing I can think of is to try to apply indexes to the temp table, and use that.
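If indexing the temp table is the route, it's cheap to try (the column name is hypothetical):
-- Index the temp table on the main join/filter column before the heavy SELECT
CREATE CLUSTERED INDEX IX_temp_Key ON #temp (KeyColumn);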
Are there any other sort of optimization things I can do? I suspect this query is part of what's causing his overall script to run as slowly as it is.
Any thoughts?
UPDATE:
It seems I've narrowed it down to a couple of UPDATE statements, oddly.
The problem is, when I run them as part of the "larger" batch, they each take somewhere between 20 and 30 seconds to run. When I run them individually, however, they run much quicker. Not sure what causes that; gonna have to track it down on Monday.
UPDATE: In a perfect example of premature optimization, the time delays only became visible using statistics in the client code, not in SQL Server. The error turned out to be in a private library using internal error reporting. When using a temp table, the code failed to look in the right place for the table DDL (which, honestly, shouldn't be a problem in the first place, but...) and was writing 14,000 errors to a List(Of..., which was completely invisible.
After correcting this problem, the inserts now take 1.8 seconds using temp tables, faster than "main" tables as I suspected, and most of that time is on the client side, gathering data into the DataTable.
Original post follows:
I have code that:
SELECT * INTO [Tablename_Temp] FROM [Tablename] WHERE 1=0
I initially ran this on "normal" tables with names like Accounts_Temp so I could easily verify the results in SSMS. On average this took about 4 seconds to upload 12,000 rows across 20 tables.
Once I was happy with the results, I added #'s, so Accounts_Temp became #Accounts_Temp. This took an average of 13 seconds for the same data.
This was very surprising. I would have expected using # to be faster, as I understand it turns off any number of bits of internal code that would run on a normal table.
I thought perhaps the non-# versions were getting indexes, although I doubted it; and indeed, they are not.
What might explain this?
For starters, I'm a Systems Engineer/Admin, but I do dabble in scripting/DevOps stuff, including SQL, from time to time. Anyway, here's the current situation.
We are migrating our DBAs to laptops, and they insist that they need SQL Server Management Studio 2014 installed with the Team Foundation plug-in. The two big points they make for needing this 10-year-old tool are source control and debugging. Our source control is currently Team Foundation Server (TFVC).
I just met with one of the head DBAs yesterday for an hour, and he showed me how they work and how they use each tool they have. This is the breakdown:
SSMS14 - Connects to TFVC and opens SQL Server Management Studio solution files and/or SQL Server project files. This lets them open a source-controlled version of those files, which show up in Solution Explorer with the connections and queries, like this.
SSMS18/19 - Source control was removed by Microsoft, so they can do the same thing as in SSMS14, EXCEPT it's not source controlled.
Visual Studio 2019 - Can connect to source control, but in the DBAs' words, modifying the different SQL files within the project/solution isn't good enough.
Example 1 of a SQL Project and files
Example 2 of a SQL Project and files
So again, I'm not an expert when it comes to SQL or Visual Studio, but this seems like our DBAs being lazy and not researching the new way of doing things. Microsoft got rid of source control in SSMS 18/19, but I feel like it can be done in VS 2019 or Azure Data Studio. Something I was thinking: why can't they just use VS 2019 for source control > check out a project > make changes locally in SSMS 18 > save locally > push changes back in VS 2019? This is pretty much what I do with Git and my source-controlled scripts.
Anyone have any advice or been in the same situation?
I am hoping someone can help verify what I am trying to do here, but I swear Microsoft tries its best to make it hard to find information about licensing usage entitlements.
I am working with a nonprofit that uses Sage 300 and wants to explore moving it to AVD. I know that you must use a subscription-based licensing model to run SQL Server in Azure, or look at one of the SQL-as-a-service products.
Long story short: as a nonprofit they can buy SQL/CAL licensing with Software Assurance for dirt cheap, and the vendor won't support the SQL-as-a-service option. My issue is that the License Mobility and Azure Hybrid Benefit documentation is not crystal clear on whether I can use this version in Azure. It sounds like I can with a mobility partner, but can I in Azure itself?
I'm on SQL Server 2019 with SSMS. Error 40. I'm a total newbie. This error is something I've been struggling with since yesterday, when I installed the server: I can't connect to it. The only thing I could think of was the firewall warning I saw during the installation process, but I could be wrong. Can anyone help?
Hello,
SMB single IT guy here ;).
We use a few databases on an MS SQL Server 2016.
We are discussing moving our servers from VMware to a different system, and also to different hardware.
I want to test potential SQL Server performance beforehand.
I am looking for a way to take a database backup and then record the real transactions over a day. I would then like to use the real transactions to test performance on the new server.
Is there a tool for that?
Thank you
Daniel
Asking if others have seen this behaviour. This is the scenario: a 2-replica, 2-node Always On SQL Server cluster in an active/passive configuration.
We begin by installing the monthly Microsoft OS patches on the secondary replica. So far so good. Then the actual SQL Server updates kick off. At that very moment, the application loses connectivity to the database.
This doesn't make sense to me, since the primary replica remains intact, yet it can't be reached.
Cluster events show the error in the image.
After the update is finished, the secondary node is rebooted, and when it comes back, connectivity to the primary is re-established.
We outsourced DB support to an external company, and they believe the issue is the network. I'm not a DBA, just a tech, but I disagree with them, as it only occurs when updating SQL Server.
This has been happening since we went live a few months ago.
Any ideas on what could be causing this?
Hi
Just wondering what people's thoughts are on using Azure to host a SQL database that will take snapshots of data from our on-premises server. We have end users from all over the globe using Power BI for reporting.
With Azure, we would point everyone and everything there for reporting: Power BI, Excel, R Studio. The aim would be to move all our resource-hogging data processing to the cloud and leave on-prem solely for production-related tasks.
Or
Should we aim to do this on-prem instead? Is there a benefit?
I ran into an odd scenario in development yesterday. We have been testing the SQL MI Link feature for some time, and it has worked well. It's a decently large database, so it takes a couple of hours to set up the MI link and seed the data.
Through our app we had users running some disk-intensive processes, and when we checked the MI link we found that it was not keeping up with the primary DB. The MI link is set up in async mode. The database has 4 data files and is approximately 400 GB. The MI itself is set up as General Purpose, 4 cores, premium-series hardware (for the core-to-memory ratio).
A user reported that changes were not being updated in the MI database. When looking at sys.dm_hadr_database_replica_states, everything showed synchronized and healthy, but secondary_lag_seconds was high and would not go down even after a couple of hours. It was as if it had stopped synchronizing data. I paused and resumed data movement a couple of times, but that did not help, and then I tried resizing the MI to 8 cores just to see if that helped, but it didn't.
As a last resort, today I am tearing down the MI link and setting it back up, but having multiple hours of downtime is not going to work in PROD. Has anyone seen this behavior with MI link?
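For reference, this is the kind of check described above (a sketch):
-- Replica sync state and lag as reported on the MI link
SELECT database_id,
       synchronization_state_desc,
       synchronization_health_desc,
       secondary_lag_seconds
FROM sys.dm_hadr_database_replica_states;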