/r/SQLServer
Microsoft SQL Server administration and T-SQL programming, including SQL tutorials, training, MS SQL Server certification, and SQL Server database resources.
Hi everybody, I've tried searching for the answer to this in a lot of places, and so far turned up no concrete answers. As a disclaimer, I don't have much of a background in computer science or data science (mainly limited to writing SQL queries on existing databases).
I have SQL Server 2022 (a trial edition) and have been using it for a few weeks, as I need to create a database for a project I am working through. Every time I restart my computer (every few days), I lose the ability to log in to my local database in SSMS. So far, the only reliable fix I have found is to fully uninstall the instance and create a new one entirely. Obviously, that's a bad fix. For clarity, during setup I configured mixed mode authentication so I could set a defined password for the SA account instead of relying on Windows credentials alone (my company has gone passwordless, and I suspect that would have created difficulties if I were relying on Windows credentials for login).
The database works fine, and I can log in with Windows creds or the password UNTIL I restart. Then it no longer lets me in, saying: "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible."
Today I was looking in SQL Server Configuration Manager, and under SQL Server Services I see that all of the services are in the "Stopped" state. When I try to start them, I get "The request failed or the service did not respond in a timely fashion." I have tried to start every service and gotten the same error. The only one running is "SQL Server Browser".
This might be a much more complicated problem than I realize, but I am hoping it might be a simple fix. If anyone has a clue what could be causing this, I would be extremely appreciative!
Been using SQL Server since v6, so I know a lot of the arcane things that can happen under the covers when using nvarchar(max) as a column type. I've run into mysterious truncation at 4000 when concatenating with variables of different sizes. I even know that the size isn't about the number of characters, but the number of byte pairs.
I'll never say I know EVERYTHING, but I know more than the average person and I'm humble enough to question whether I'm right or wrong.
THE SCENARIO: I'm working with JSON data. I have an input parameter for the JSON document, and its type is NVARCHAR(MAX). I'm parsing it and saving the results to some tables, but I am not saving the parameter's value.
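Roughly the shape of the proc, with made-up names (a minimal sketch of the pattern, not my actual code):
CREATE OR ALTER PROCEDURE dbo.ImportOrders
    @json NVARCHAR(MAX)
AS
BEGIN
    SET NOCOUNT ON;
    -- shred the document and persist the rows; the parameter's value itself is never stored
    INSERT INTO dbo.Orders (OrderId, Total)
    SELECT j.OrderId, j.Total
    FROM OPENJSON(@json)
        WITH (OrderId int '$.orderId', Total decimal(10, 2) '$.total') AS j;
END;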
In a code review, someone commented that using a varchar(max) variable or parameter causes data to be stored in tempdb. This didn't ring true to me, but that's based on what I know about how memory allocation for variables works in different systems. When I tried to research it, all of the information I found addressed column storage concerns; I couldn't find anything about using it as a variable/parameter and any issues from a memory/storage performance aspect.
THE QUESTION: When using a variable/parameter (NOT a column) of type NVARCHAR(MAX), what are the potential pitfalls and side effects from a performance/memory perspective? Does anyone have any resources they can share about this?
The person reviewing my code was VERY adamant about this but couldn't give a clear reason why, and I'd like to either give a well-articulated reason why his concerns are unfounded or learn what I'm missing.
Hi, so I work for a company that is basically just using Excel spreadsheets for storing data, and I think using SQL would be better. The problem is that privacy is a concern for them, and they aren't looking to just trust Microsoft with their data. I'm wondering the best way to set this up. My thought is dedicated hardware somewhere on premises, on the network, but how reasonable would this be for a small company with maybe just one person to maintain it? (That person would be me. I understand SQL, but I don't really have experience setting up DBs beyond personal projects.)
Re: Killing remote connections SPIDs
Our nightly ETL is getting blocked by some remote connections. I know the login_name of the remote connection. I have a SQL Server Agent job (Kill_SPID) that runs every 7 minutes during the ETL; it runs the T-SQL below in a cursor and then issues KILL against each SPID in the result set. Unfortunately, it is not working well. I think it must have something to do with the remote connection getting established and being kept open. As I sit here and execute sp_whoisactive, I can see SPID 137 come and go from the result set; each time it returns, it has a different sql_text, but the elapsed time (dd hh:mm:ss.mss) continues to grow, and there can be multiple active tasks returned with the same SPID 137 across multiple databases. Any suggestions on how to better kill SPIDs?
SELECT DISTINCT
    p.spid,
    RTRIM(LTRIM(p.loginame)) AS loginname,
    DB_NAME(p.dbid) AS dbname
FROM sys.sysprocesses p
JOIN sys.sysdatabases d ON d.dbid = p.dbid
WHERE RTRIM(LTRIM(p.loginame)) LIKE 'remote_user_Store1'
  AND DB_NAME(p.dbid) LIKE '%'
  AND p.spid != @@SPID
  AND p.spid > 50
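For context, the job wraps that result set in a cursor along these lines (a reconstructed sketch, not the exact job step; KILL won't take a variable, hence the dynamic SQL):
DECLARE @spid smallint, @sql nvarchar(100);
DECLARE kill_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT DISTINCT p.spid
    FROM sys.sysprocesses p
    WHERE RTRIM(LTRIM(p.loginame)) LIKE 'remote_user_Store1'
      AND p.spid != @@SPID
      AND p.spid > 50;
OPEN kill_cursor;
FETCH NEXT FROM kill_cursor INTO @spid;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'KILL ' + CAST(@spid AS nvarchar(10)) + N';';
    EXEC sp_executesql @sql; -- build and run each KILL dynamically
    FETCH NEXT FROM kill_cursor INTO @spid;
END
CLOSE kill_cursor;
DEALLOCATE kill_cursor;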
I wanted to check in with anyone who has implemented Azure Premium SSD v2 on their Azure SQL VMs. MS has recommended running no less than P30 Premium SSD v1 disks on your SQL Servers; these disks come with 5000 IOPS and 200 MB/s throughput. As you migrate to Premium SSD v2, what settings are you choosing for IOPS and MB/s? I understand workloads differ, but is there a good starting point, a good middle of the road?
I'm going to be doing some testing of our app with the VM configured with Premium SSD v2 and see what happens as I increase these numbers. I assume they are closely tied together... you wouldn't want to run 10k IOPS and leave MB/s down at 200; you should probably scale those numbers together, right?
I need to migrate a database from SQL Server to PostgreSQL. Which tools should I use? It should not be done manually.
I'm posting a link to the SQL Server Discord (which I recently discovered), just in case someone wants to use this type of communication:
Thank you
I am a SQL Server DBA and I don't have any certifications. I am planning to get one, so as a DBA, which certifications would be good? For cloud (e.g., Azure), where should I start?
Hello everyone, I am fairly new to MSSQL Server and I am currently trying out Extended Events for the first time, but now I am stuck.
I am trying to collect some events to calculate database downtime down the road. The events "sqlserver.database_started" and "sqlserver.database_stopped" sound very promising, but I immediately noticed that the stopped event does not pick up manual "SET OFFLINE" statements. A manual start, on the other hand, does get picked up by the database_started event. So I tried to include a "sql_statement_completed" event, which picks up statements, but for some reason not the "SET OFFLINE" one. What information am I missing? And if someone knows the specific events needed to capture all possible downtime scenarios, I would greatly appreciate it if you could share them. Thank you in advance for your help!
Edit: I am stupid. Instead of testing the "stopped" event raw, I immediately included a database name filter for the databases I needed, but the database name for the stop event is always "master", so yeah... Thanks to the comments, I will now expand to other states and events that would count as downtime.
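For anyone finding this later, the bare-bones session I started from looked roughly like this (session and file names are just examples):
CREATE EVENT SESSION [db_downtime] ON SERVER
ADD EVENT sqlserver.database_started
    (ACTION (sqlserver.database_name)),
ADD EVENT sqlserver.database_stopped
    (ACTION (sqlserver.database_name))
ADD TARGET package0.event_file (SET filename = N'db_downtime')
WITH (STARTUP_STATE = ON);
ALTER EVENT SESSION [db_downtime] ON SERVER STATE = START;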
Hello everyone, I was hoping you might be able to help.
I have recently upgraded our production SQL environment from 2014 to 2017. We run SSRS on a separate server pointing to a ReportingServices DB on the database server. When I upgraded from 2014 to 2017, the SSRS instance was still working fine, so I left it for a week or so.
Tonight I went to upgrade SSRS to 2017 too (I've used a new server for that, so both exist at the same time), and once configured, it's not working.
I can view the SSRS web page and go into reports & data sources, but when I try to run a report, I get an HTTP 503 Service Unavailable error. It seems I'm getting that error because SSRS is crashing and stopping every time I try running a report. In the error logs I see:
library!ReportServer_0-1!1458!01/27/2025-19:15:23:: i INFO: Call to GetItemTypeAction(/report. User: [MYUSER]
library!ReportServer_0-1!1458!01/27/2025-19:15:24:: i INFO: Catalog SQL Server Edition = EnterpriseCore
crypto!ReportServer_0-1!1458!01/27/2025-19:15:24:: i INFO: Initializing crypto as user: [SERVICEACCOUNT]
crypto!ReportServer_0-1!1458!01/27/2025-19:15:24:: i INFO: Exporting public key
crypto!ReportServer_0-1!1458!01/27/2025-19:15:24:: i INFO: Importing existing encryption key
rshost!rshost!1458!01/27/2025-19:15:27:: e ERROR: Generating a dump and exiting the process due to fatal runtime error.
I can't seem to figure out what is causing it at all. It would be amazing if anyone had an idea or knew of things I should try.
Thanks!
Hi, we have a table containing crores of records (tens of millions of rows), where a developer is trying to delete data for a single day (24 hours). The code is something like this:
DECLARE @row bigint = 10000;
WHILE @row > 0
BEGIN
    DELETE TOP (10000)
    FROM dbo.TableA
    WHERE ColA >= 27012025 AND ColA < 28012025; -- bounds covering the single day
    SET @row = @@ROWCOUNT;
END
So let me know if my understanding is correct:
1. If there are only 10,000 rows for the single day, the delete completes in 1 cycle.
2. If there are 20,000 rows for the single day, it completes in 2 cycles.
3. If there are 100,000 rows for the single day, it completes in 10 cycles.
Right?
Hi
I wanted to know whether Plan Explorer is still good for analyzing SQL Server execution plans. I mean, since it was acquired by xyz, have they maintained it as it was prior to the acquisition?
We have a DB that is an export of data from our prod DB. This is designed for users to read prod data in a near real time environment. In this DB we have multiple schemas. The app data lives in a schema, let's call it APP. The users need the ability to create views, stored procedures, tables, etc in their own schema, let's call it USER. How can I grant the users access to create items in the USER schema but not the APP schema? I started by trying to grant them CONTROL of their schema, but CONTROL does not include the create permission.
I've read various answers on Stack Exchange, but none of them are working. If I grant CREATE PROCEDURE, it is granted for the entire database. How can I grant this for just the USER schema? I've read some posts talking about changing the owner of the schema... that may be something worth looking into more.
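From what I've pieced together so far, the combination to test seems to be database-level CREATE permissions constrained by ALTER on just the one schema, something like this (role name invented):
GRANT CREATE PROCEDURE TO [ReportUsers];
GRANT CREATE TABLE TO [ReportUsers];
GRANT CREATE VIEW TO [ReportUsers];
-- granting ALTER on only the USER schema should scope where those objects can be created
GRANT ALTER ON SCHEMA::[USER] TO [ReportUsers];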
Longer term, I'm working to give the users their own database where they can have full control of all schemas, and then perform cross-database queries to the read-only secondary, which will simplify this setup.
Hello.
I installed CU 17 on a test instance of SQL Server 2022 and now it fails to start with Event ID 912 followed by 3417:
"Script level upgrade for database 'master' failed because upgrade step 'msdb110_upgrade.sql' encountered error 15173, state 1, severity 16. This is a serious error condition which might interfere with regular operation and the database will be taken offline. If the error happened during upgrade of the 'master' database, it will prevent the entire SQL Server instance from starting. Examine the previous errorlog entries for errors, take the appropriate corrective actions and re-start the database so that the script upgrade steps run to completion."
I have gone through the logs, found this:
"Error: 15173, Severity: 16, State: 1.
Server principal '##MS_PolicyEventProcessingLogin##' has granted one or more permission(s). Revoke the permission(s) before dropping the server principal."
and found an article suggesting it could be mapped to a user principal, but running the query to identify that user returned 'public'. I tried revoking those permissions as suggested but it didn't help.
Has anybody else seen this error and resolved it? I can successfully start the instance using the /T902 parameter.
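For reference, the permissions granted by that principal can be listed with a query along these lines (run while the instance is up under /T902):
SELECT pe.class_desc,
       pe.permission_name,
       pe.state_desc,
       grantee.name AS grantee
FROM sys.server_permissions AS pe
JOIN sys.server_principals AS grantee
    ON pe.grantee_principal_id = grantee.principal_id
WHERE pe.grantor_principal_id = SUSER_ID('##MS_PolicyEventProcessingLogin##');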
Hey everyone. I am still fairly new to this (hence the beginner issue) and have created a schema with a few columns that I wish to connect to my Visual Studio JS project. I have installed MySQL Workbench & SSMS.
From what I have researched, it seems the first step is opening SSMS and establishing the connection that way. So I do that, and the 'Connect to Server' popup appears and asks me for my server name. This is one part I might be screwing up, but I have tried everything that I think could be my server's name, to no avail. I have attached an image (image 1) of my server information, which I think shows my server name, 'LocalMySQL92', but I could be wrong. I tried many different names and combinations of names based on what I read online. All of them returned that same error (image 2), except for when I tried 'tcp:localhost,3306'. This one returned a different error message (image 3) that said the connection was actually successful before an error occurred, but I have my doubts that a connection was actually established. There was also an option to browse for servers, but when I selected that, it returned no servers, as if it couldn't detect any (shown in image 4). So that makes me question if I even have a server up and running...
I have also read that my server's access might be an issue, and I read about the SQL configuration manager that is supposed to be within my MySQL folder on my C drive and can help with this by changing a couple of lines. I have searched for the options I read about (the file is called my.something, can't remember now) and looked all through these folders and the C drive for anything I think could possibly be the SSMS config manager, but cannot find that either :/ And I thought that was standard when I installed SSMS...
Anyway, I know this is a very beginner question... I have been researching and doing as much as I could think of for the last 36 hours before looking for help this way... But I am really struggling with this and not getting anywhere :/
Thank you so much for any light/assistance any of you can offer me here and thanks for reading. I very much appreciate it.
Image 1 server name & info
Image 2 most common server name error
Image 3 error I received when trying 'tcp:localhost,3306' as the server name, which said the connection was successful before failing
Image 4 shows no servers when I browse the 'Server name' field for servers - could this be a telling sign that I don't even have a server?
TL;DR: I cannot find my SQL server to connect to using SSMS. I wonder if I am simply unable to identify my server name, or if I even actually have a server up. I have put in a lot of effort trying to figure this out, as figuring things out yourself is the best way to learn, but I'm really getting nowhere and wasting so much time.
Can someone point me to some documentation on how to configure a custom dashboard for SQL Server on Quest Foglight?
Migrating/upgrading our SQL server, which I haven't done in... 10 years at a former job.
Our SAN has enough space in SSD storage to move it all over, but I'm looking for best practice:
Details: not a giant shop. Our one and only SAN runs all our VMs on 3 hosts. 350 users. A few services run from this SQL Server. Mostly overgrown CRM-type usage, but used constantly. (Also some logging tools that write but are rarely read, until we need to figure out why something went wrong, etc.)
We have a restaurant point of sale that uses Microsoft SQL Server as the database. I am looking for the most simple and robust way to set up a machine to be a workhorse with minimal maintenance. We usually just buy a Dell OptiPlex, install Windows 10, and install SQL Server on there, but I was wondering if I should install something like Proxmox instead and host SQL Server in a Docker container, or is that overcomplicating it?
Also, if I installed Windows bare metal on an N100 CPU with 32 GB DDR4, do you think it would handle all our SQL Server hardware requirements fine, or is that too little CPU power? The SQL Server traffic won't be crazy; imagine 10-15 clients reading and writing, like a point of sale system at a large bar would.
Hello, hoping that someone can help me find the right product for an Azure SQL Server.
I can write SQL to build the needed reports, setting static date ranges, via SSMS. Currently, I hand the queries off to an outsourced developer who writes Azure Logic Apps to run these queries at set intervals and mail an attached spreadsheet, but I am seeking something more scalable.
Essential Requirements:
-Be able to save queries into some sort of platform and grant users access - if they have access, they can run the report.
-Solution would allow input of date ranges for the reports.
-Emailed reports would be excel attachments.
Nice to haves:
-If users are granted access to the report, they can subscribe - i.e. send me this report every week/month/quarter with X date ranges.
-Being able to format the report (such as font/background colors of the top row, turning filtering on).
I previously used a product on the IBM i platform called "SQL ViewPoint". It allowed most of the above - I didn't necessarily love it, but it worked. I contacted them, but they say it only works on IBM i. I've also tried Microsoft's "Report Builder" - stood it up in a VM. Translating queries into reports is too cumbersome a process for me.
Thanks in advance for any recommendations.
Hello,
I am working with a database that is approximately 4TB in size. Some time ago, we had to migrate the database from one disk unit to another. To perform this migration online, we decided to create new filegroups and took the opportunity to separate the tables by categories. Each category was assigned a new filegroup with the goal of leveraging all the benefits of having separate filegroups. Previously, everything was in a single filegroup.
The migration was successful without any issues. However, those benefits were never utilized. With the database's growth, we now face a situation where maintenance tasks, such as online index rebuilds, leave a lot of unused space in each filegroup. To quantify this, there are about 5 filegroups, each with approximately 150GB of unused space, totaling 750GB of wasted space. The goal is to reduce this unused space.
One alternative I considered is partitioning these large tables. This would significantly reduce the unused space, but the downside is the effort required and the potential issues it might cause. I have had the largest table partitioned for years, so I have some knowledge in this area, although that table's structure made it easier to identify the correct partitioning key.
An intermediate "solution" is to consolidate filegroups, but I am concerned this might introduce performance issues. The idea is not to return everything to a single filegroup, but to reduce the number of filegroups.
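If I do consolidate, I assume the per-table mechanics would be rebuilding each clustered index onto the target filegroup, roughly like this (names invented):
CREATE UNIQUE CLUSTERED INDEX [PK_BigTable]
    ON dbo.BigTable (Id)
    WITH (DROP_EXISTING = ON, ONLINE = ON) -- Enterprise allows the online rebuild
    ON [FG_Consolidated];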
The database is running on SQL Server 2014 Enterprise, and the files are on a SAN Gold unit in Rackspace. The separation of filegroups was not done for performance reasons at the time but to take advantage of and distribute workloads during my early stages as a DBA.
What considerations should I have before deciding to consolidate filegroups? Are there any other alternatives to address this issue?
Thank you!
Edit 1: Additionally, this database is replicated to another server using log shipping, so we have another 750GB of unused space on that server as well.
Hello All,
I've made something of an error in my migration path. I had assumed that Data Studio, I suppose because of the "Online" naming, would manage the backup and restore of the databases from on-prem to Azure, using a storage location as a proxy place to dump the files. I've since been disabused of that assumption, and am now distrustful of the Migrate extension.
I was hoping for some form of automation on this - that the Migrate extension would keep a regular sync of the database going from source to destination until the cutover happens.
So now, I have taken a full backup, I have placed it in the blob, and Data Studio has gone from Restoring to "Ready for Cutover". Which is disconcerting. How exactly is this an online migration with minimal to no downtime? What's happening to the transactions since the full backup?
It feels like quite the bait and switch, when I was prepared to manually "Backup, Restore, repoint all apps to new DB, test, confirm all working, shut down original DB access".
Have I gone wrong somewhere?
We currently have hundreds of clients in a single SQL Server instance. Each client is in a separate database. All databases have the same structure. Management wants to merge all these databases into a single database. We would probably have to add TenantID to every table in order to do this and partition everything by TenantID. Has anyone done anything like this? Any advice will be welcome. Thanks.
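The kind of change I'm picturing per table is roughly this (names and boundary values invented):
-- add the tenant key to every table
ALTER TABLE dbo.Orders
    ADD TenantID int NOT NULL CONSTRAINT DF_Orders_TenantID DEFAULT (0);
-- partition by tenant ranges; clustered indexes would then be rebuilt on the scheme
CREATE PARTITION FUNCTION pfTenant (int)
    AS RANGE RIGHT FOR VALUES (100, 200, 300);
CREATE PARTITION SCHEME psTenant
    AS PARTITION pfTenant ALL TO ([PRIMARY]);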
EDIT: I share the concerns that many of you have expressed. I wanted to get a sanity check from this community to make sure that I was not off track. We have a meeting tomorrow with the application owner to discuss the reasoning behind this request and to understand just what problem they need solved. If you have links to any articles discussing this scenario, please post. Thanks for all the feedback. Much appreciated.
I have a situation where I have AGs that span from on-prem to Azure. Right now I have on-prem backups running to local NAS devices. These are not immutable. I want to get some immutable backups and as I already have replicas in the cloud, it would make sense to do it there. All my writes go through the on-prem replicas, and moving writes to Azure is not currently an option outside DR scenarios.
I've been looking into potential options.
Blob storage is out as the compressed backups are larger than the max size possible.
Other options I'm considering are backing up to a local VM disk and copying that to blob storage, but this doesn't scale well across multiple AGs and many servers. I'm also considering standing up a VM with a large disk, using that as a NAS target, and configuring a backup vault to take regular snapshots for immutability. Similarly, maybe Azure Files with an SMB share would do the same job.
For those of you taking large (> 20TB) backup in Azure, what's your solution?
I was investigating a SQL Server that was having poor performance. The database has a lot of highly fragmented indexes. After rebuilding them, performance went back to business as usual.
When I compared this SQL Server with another SQL Server running at a different customer site, I saw the same issue: highly fragmented indexes.
How can I simply prove that this fragmentation is causing the poor performance?
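For context, this is the kind of query I've been using to measure the fragmentation (LIMITED mode keeps the scan cheap):
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND ips.page_count > 1000 -- small indexes fragment harmlessly
ORDER BY ips.avg_fragmentation_in_percent DESC;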
I have SQL Server SE currently running on WS16. WS16 is a VM in Hyper-V. The host is WS22.
I have a new WS22 VM ready to go. I need to migrate SQL Server over to it.
These are the products from TechSoup to choose from:
https://www.techsoup.org/search/products/sql%20server/
This is what TechSoup lists as purchased on September 6, 2019:
LVS-47430 - SQL Server Standard Edition, Server/CAL Licensing
LS-47547 - SQL Server User CAL
Question:
What should I be purchasing through TechSoup now?
NOTE: This is for a non-profit that has new management and no history of how the original purchase was made. I am doing my best to help untangle all this.
Thanks!
I'm trying to extract part of a field value from a SQL table. An example string from this column/field is:
Chain: 01234756789;Ext:123;Ext:456;Ext:789;0987654321;
The data I want is the '789' from the third Ext. This field denotes a telephone call chain from entering the system to being answered and transferred.
The system doesn't report inbound calls as separate call legs (annoyingly), so I was wondering if there is any way to check for and extract the third Ext number from these cells and add it into its own column.
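The closest I've gotten is something like this, which assumes SQL Server 2022 (or Azure SQL) for STRING_SPLIT's ordinal argument:
DECLARE @chain varchar(200) = 'Chain: 01234756789;Ext:123;Ext:456;Ext:789;0987654321;';
-- split on ';', keep only the Ext segments, then take the third one in order
SELECT REPLACE(s.value, 'Ext:', '') AS third_ext
FROM STRING_SPLIT(@chain, ';', 1) AS s
WHERE s.value LIKE 'Ext:%'
ORDER BY s.ordinal
OFFSET 2 ROWS FETCH NEXT 1 ROWS ONLY;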
I work for a non-profit, and we are constrained with regards to our budget. We only have one big .bak file, given to us by our vendor, which is 95 GB. Obviously the free version of MS SQL Server would not work for this because of the 10 GB limit. Is there a way for me to just divide the 95 GB database into smaller databases and use it in the free version? If not, what would be the best pricing model for us? I will be the only one using this SQL Server, on my PC, just as one big Excel file to get data from. Is MS SQL Server a one-time purchase, or do we have to pay monthly for it? I did some research online, but it is quite confusing, and the wording they use seems vague to me.
I just restored a database on SQL Server 2022 from a .bak file. This database was given to my org by our software vendor. I can see data in most of the tables when I select the top 1000 rows, but some of them, for example the columnExplanation table, show up as just empty. Could this be a permission issue, or are the tables really empty? I used Windows authentication to connect to the database. I am fairly new to SQL Server; please ask clarifying questions if anything is unclear.
I have an ASP.NET Core web app backed by SQL Server, running on a PC with Windows Server 2022. I'm using Entity Framework Core to talk to the DB. When I run my app out of Visual Studio 2022 using IIS Express, everything works fine. However, if I add Docker support and run it in a Linux container, it fails when it tries to talk to the database. It gives me a collation error.
Cannot resolve the collation conflict between "Latin1_General_BIN2" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
I've checked the DB and the collation is consistent everywhere as "SQL_Latin1_General_CP1_CI_AS".
I tried adjusting the locale in the Dockerfile, and it had no effect:
RUN apt-get update; apt-get install -y locales; echo "en_US.UTF-8 UTF-8" > /etc/locale.gen; locale-gen en_US.UTF-8; update-locale LANG=en_US.UTF-8; rm -rf /var/lib/apt/lists/*
Oddly, changing to a Windows container did not fix the issue either. It still complains about the collation.
Why would Docker cause a collation issue?
==EDIT - SOLVED ==
I figured it out. EF Core is the problem. I have the function below; I added the null coalesce on userRoles, and that fixed the problem.
public async Task<List<HomeTile>> GetMenuOptionsAsync(List<string> userRoles)
{
    List<HomeTile> menuOptions = new List<HomeTile>();
    userRoles = userRoles ?? new List<string>(); // This fixes the problem

    try
    {
        var q = db.HomeTileRole.Where(htr => userRoles.Contains(htr.RoleId)).Select(htr => htr.HomeTileId).ToQueryString();
        var authorizedHomeTileIds = await db.HomeTileRole.Where(htr => userRoles.Contains(htr.RoleId)).Select(htr => htr.HomeTileId).ToListAsync();
        menuOptions = await db.HomeTile.Where(ht => authorizedHomeTileIds.Contains(ht.Id)).OrderBy(mo => mo.Ordinal).ToListAsync();
    }
    catch (Exception ex)
    {
        logger.LogError(ex, ex.Message);
    }

    return menuOptions;
}
If userRoles is null EF Core translates the query into:
SELECT [h].[HomeTileId]
FROM [cg].[HomeTile_Role] AS [h]
WHERE [h].[RoleId] IN (
SELECT [u].[value]
FROM OPENJSON(NULL) AS [u]
)
This causes the collation error.
If userRoles is empty then EF Core translates the query into:
DECLARE @__userRoles_0 nvarchar(4000) = N'[]';
SELECT [h].[HomeTileId]
FROM [cg].[HomeTile_Role] AS [h]
WHERE [h].[RoleId] IN (
SELECT [u].[value]
FROM OPENJSON(@__userRoles_0) WITH ([value] nvarchar(100) '$') AS [u]
)
And then everything is fine.