/r/DB2
A forum for discussion around International Business Machines (IBM) DB2 database platform. Questions and discussion around any of the Linux, Unix, Windows, or Mainframe platforms are welcome.
I am trying to migrate some Db2 for z/OS tables to Db2 for LUW and I would like to maintain the same ordering as in the source format (EBCDIC).
Would anyone know what collation I should define the LUW database with?
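Not an answer from the post, but for context: in Db2 LUW the collation is fixed at CREATE DATABASE time. A minimal sketch of where it is specified (database name and territory are placeholders; IDENTITY collates by binary code point, which may or may not match your EBCDIC ordering, so verify against your source data):

```sql
-- Collation can only be chosen when the database is created (Db2 LUW).
-- IDENTITY is shown as an example; test whether it reproduces your
-- EBCDIC sort order before committing to it.
CREATE DATABASE MYDB
  USING CODESET UTF-8 TERRITORY US
  COLLATE USING IDENTITY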
I installed IBM DB2 Express-C Version 10.5.4 and have been unable to use any commands in the Command Line Processor (CLP). The commands I've tried include 'connect to sample' and 'list database directory'. I get the following error message every time:
SQL1031N The database directory cannot be found on the indicated file system. SQLSTATE=58031
I was told that to check if the installation of DB2 was done correctly, I'd be able to test it using DBeaver, and so I did. As expected, I can't Test Connection using DBeaver either. I get the following error message:
SqlException
Some troubleshooting I've done myself includes:
None of the above seem to solve my issue at all. I'm currently using Windows 11. All of my classmates had no problem installing and running both DB2 and DBeaver.
One thing to note is that I did a fresh install of Windows 11 when I got my laptop a year ago; none of my classmates seem to have done that, so I wonder if that has anything to do with it, but I haven't been able to find the exact cause or the solution. Any help would be much appreciated. Thank you!!
I am trying to determine which of these I should query when looking for specific information.
For example, I was told that to retrieve column names I should use SYSCAT.COLUMNS, but that to retrieve specific information about column properties I should use SYSIBM.COLUMNS.
The only explanation I can get is, roughly, that SYSCAT has base-layer information while SYSIBM contains lower-level system information, which doesn't really help, seeing as I don't know what constitutes "lower-level system information". I don't see how the name of a column and the length of a column differ in the kind of data they are. Are they not both metadata? Is there a different way I should be looking at this?
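For comparison, here is a sketch of querying both layers for the same table (the schema and table names are placeholders; SYSIBM.SYSCOLUMNS is shown as an example of a base catalog object, whose column names differ from the documented SYSCAT views and can change between releases):

```sql
-- SYSCAT view: the documented, stable catalog interface.
SELECT colname, typename, length
FROM syscat.columns
WHERE tabschema = 'MYSCHEMA' AND tabname = 'EMPLOYEE';

-- SYSIBM base catalog underneath: internal naming, release-dependent.
SELECT name, coltype, length
FROM sysibm.syscolumns
WHERE tbcreator = 'MYSCHEMA' AND tbname = 'EMPLOYEE';
```

In practice, sticking to the SYSCAT views unless the information is only exposed at the SYSIBM level keeps your queries portable across versions.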
Hello,
I'm trying to capture information about a failure, and it has led me down a bit of a rabbit hole.
The issue is that I have a field that should be unique and an application/hardware component that doesn't want to play nicely with troubleshooting the issue.
I explored setting the column to unique, but since NULL is considered a value for uniqueness, and NULL is a valid input, that wouldn't work. I can't use a pseudo-NULL value (such as matching another field value), because there are concerns that the data would be misused or incorrectly applied. I attempted to use a pseudo-NULL value but masking wouldn't work as it never returned a NULL value to the application (and doesn't solve the problem of using the field for a "proper" value).
The next step would be to block nonunique values. So I developed a trigger:
CREATE TRIGGER duplicateISBNtrigger
BEFORE
UPDATE OR INSERT
ON
library.books
REFERENCING NEW AS N
FOR EACH ROW
BEGIN
DECLARE ISBN_count INT;
-- Does the ISBN value already exist?
SET ISBN_count = (SELECT COUNT(*) FROM library.books WHERE ISBN_NUMBER = N.ISBN_NUMBER);
IF ISBN_count > 0 THEN
-- Log the alert
INSERT INTO library.books_errors (alert, inputData)
VALUES ('ISBN Already Exists', N.ISBN_NUMBER);
-- Raise an exception to prevent the update
SIGNAL SQLSTATE '45123' SET MESSAGE_TEXT = 'This ISBN Already Exists.';
END IF;
END
This trigger actually works well enough: it rejects the duplicate value (we'll call it an ISBN), but it permits NULL values. However, if you look carefully you'll see that I'm also looking to log incidents.
This puts me in a chicken-and-egg scenario: if the SIGNAL line is in place, the error is generated as expected; however, since it's technically an exception, it appears to roll back the transaction, which means the logging INSERT is discarded. If I drop the SIGNAL, the error is logged, but it's silent and no updates are made.
What I'd like to do is catch the scenario where the ISBN exists, log the details of the error, and still generate the error without the logging insert being rolled back. I tried using EXECUTE IMMEDIATE...COMMIT and explored isolation levels, but I've not been successful.
An AFTER UPDATE/INSERT trigger could be coded to revert the data, but then the SIGNAL executes, undoing the effect of the insert trigger (which would then store the incorrect ISBN number in this example).
Is this a possibility, or is this something that cannot be done?
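One pattern worth testing (not from the post itself, and assuming Db2 LUW): move the logging INSERT into an AUTONOMOUS procedure. An autonomous routine commits in its own independent transaction, so the log row can survive the rollback triggered by the SIGNAL. Procedure and parameter names here are hypothetical:

```sql
-- Sketch: an AUTONOMOUS procedure commits independently of the caller,
-- so the log row is not discarded when the trigger's SIGNAL rolls the
-- main transaction back. Verify this behaves as expected from a trigger
-- in your environment before relying on it.
CREATE OR REPLACE PROCEDURE library.log_duplicate_isbn (IN p_isbn VARCHAR(20))
LANGUAGE SQL
AUTONOMOUS
BEGIN
  INSERT INTO library.books_errors (alert, inputData)
    VALUES ('ISBN Already Exists', p_isbn);
END
```

The trigger body would then do `CALL library.log_duplicate_isbn(N.ISBN_NUMBER);` immediately before the SIGNAL, instead of inserting into the error table directly.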
Can anyone tell me what the DATATYPEID column in the SYSCAT.SEQUENCES table means in DB2 version 11.5.4?
Hi all!
Does anyone know if it is possible to have HADR between 2 Db2 servers with different licenses (for example: Standard edition for the Primary and Community for the Standby)?
I know SQL Server has a query you can run to show slow queries; does DB2 have anything similar? Are there any tutorials on performance monitoring? I'm running 11.5.
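One place to start (a sketch, assuming Db2 LUW 11.5 with default monitoring enabled) is the package cache, which Db2 exposes through the MON_GET_PKG_CACHE_STMT table function:

```sql
-- Top 10 cached statements by total CPU time (times are in microseconds).
SELECT SUBSTR(stmt_text, 1, 100) AS statement,
       num_executions,
       total_cpu_time,
       total_act_time
FROM TABLE(MON_GET_PKG_CACHE_STMT(NULL, NULL, NULL, -2)) AS t
ORDER BY total_cpu_time DESC
FETCH FIRST 10 ROWS ONLY;
```

Note the cache only holds recently executed statements, so long-term monitoring usually means sampling this regularly or using an event monitor.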
Hi all, I come from the Oracle/SQL Server world and am struggling to write simple scripts that use loop variables to iterate and apply DML on a large table with commit points.
I'm struggling to implement the same in Db2, and most suggested methods are to wrap them in stored procedures, but I don't want to write a sproc for each instance of my data updates.
Does anyone have any examples I can look at?
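For reference, Db2 LUW does support anonymous compound blocks from the CLP, run with an alternate statement terminator, which avoids creating a stored procedure. A sketch of a batched delete with intermediate commits (table, column, and value are placeholders; verify COMMIT inside an anonymous block works on your version before using it on real data):

```sql
-- Run with: db2 -td@ -f batch_delete.sql  (@ is the statement terminator)
-- Deletes qualifying rows in 10,000-row batches, committing each batch.
BEGIN
  DECLARE v_rows INT DEFAULT 1;
  WHILE v_rows > 0 DO
    DELETE FROM (
      SELECT 1 FROM myschema.big_table
      WHERE status = 'OBSOLETE'
      FETCH FIRST 10000 ROWS ONLY
    );
    GET DIAGNOSTICS v_rows = ROW_COUNT;
    COMMIT;
  END WHILE;
END@
```

The DELETE-from-fullselect form is what limits each batch to a fixed number of rows; the same pattern can be adapted for UPDATEs.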
Hey guys,
Quick question, not so much on DB2, but what are the typical Laws and Regulations that a Data Engineer needs to consider when working with Data, creating data pipelines, and databases for a business?
Is there a rough estimate of how long that can take per GB of used space?
Is there any way to watch the progress?
When I log into db2 through putty I automatically have a process running: Bsh Bsh Bsh
How can I stop this from auto-running? TIA
I'm sorry, but what do tablespaces (TS) do? It looks like they just cluster tables. Database > TS > Tables?
I had a couple of friends point out another article on getting Db2 to run on Apple Silicon. I had some back and forth with the author, and it walks through the process. Check out Kelly Rodger's blog - Db2 on Apple Silicon.
Hello, I'm new to Db2 and trying to learn how to write native stored procedures on Db2 for z/OS. So far no success. I'm trying a simple insert with default values and declared variables. Any examples or URLs I can refer to for learning? TIA
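For anyone in the same spot, here is a minimal native SQL procedure sketch in the style used on Db2 for z/OS; the schema, table, and column names are made up for illustration:

```sql
-- Minimal native SQL procedure sketch (Db2 for z/OS):
-- declares a variable, sets it, and inserts a row.
CREATE PROCEDURE MYSCHEMA.ADD_ROW (IN P_NAME VARCHAR(50))
LANGUAGE SQL
BEGIN
  DECLARE V_NOW TIMESTAMP;
  SET V_NOW = CURRENT TIMESTAMP;
  INSERT INTO MYSCHEMA.DEMO (NAME, CREATED_AT)
    VALUES (P_NAME, V_NOW);
END
```

It would be invoked with `CALL MYSCHEMA.ADD_ROW('example')`.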
Been champing at the bit to announce this for a while now, as I was part of the beta program. Amazon announced the release of AWS Relational Database Service for Db2 about an hour ago. There will be a few sessions and talks on the subject at AWS re:Invent 2023 this week. Check out the blog article I wrote about my experience with the product - Datageek.Blog: AWS Relational Database Service for Db2.
Looks like Db2 Fix Pack 11.5.9.0 was released. I noticed an interesting EXPLAIN enhancement. It also looks like Pacemaker got an update as well.
Stumbled on a good article while browsing the Db2 Community Blogs that I thought I would pass on. The author not only explains what db2move is, but he also provides a strategy for how to use it in a real-world scenario. You can check it out here - Streamlined Db2 Data Migration.
A co-worker of mine pointed out that Docker announced that Docker Desktop 4.25 has a GA release of a more mature and near-native Rosetta for Linux. I know in the past that some people used the experimental version of this to get Db2 to run on the MacBook ARM chipset with inconsistent success. I wonder if the GA release helped stabilize it. I have been holding on to my MacBook on Intel, so I can't test. I would be curious if anyone here can test it. You can read about the release in this blog.
Any of my fellow #Db2 friends going to AWS Re:Invent in Vegas this year? I'm going for my first time and am looking for friendly faces.
Hi all,
I need to create a batch script that executes a stored procedure. The .bat file will then be triggered by a Windows Task Scheduler on a daily basis at a particular time.
I saw a lot of SQL Server solutions for this problem, but only a few for DB2. I am really new to DB2, so I am looking for a straightforward solution, as I've been stuck on this problem for a couple of days already.
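A sketch of one common approach (the database name, SQL file path, and procedure name are placeholders): keep the SQL in a small script file and have the .bat launch it through db2cmd, which sets up the CLP environment that plain cmd.exe lacks.

```bat
@echo off
REM run_proc.bat - sketch only; MYDB and the paths are placeholders.
REM C:\scripts\call_proc.sql would contain, for example:
REM   CONNECT TO MYDB;
REM   CALL MYSCHEMA.MYPROC();
REM   CONNECT RESET;
REM /c = run the command, /w = wait for completion, /i = inherit this window.
db2cmd /c /w /i db2 -tvf C:\scripts\call_proc.sql -z C:\scripts\call_proc.log
```

The `-z` option appends the CLP output to a log file, which is handy when Task Scheduler runs the job unattended.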
Thank you.
Disclaimer, I'm not a DBA and I'm brand new to DB2 so sorry if these questions are basic :)
I've been tasked with speeding up some queries we're running on our AS400. One of the things I read about for making them more performant was indexing our tables. I had some questions before I recommend that action and start.
Are primary keys indexed by default with DB2?
Related to 1, is there a difference between a "unique key" and a primary key in DB2? I took a look around our database and found a table that lists all primary keys (sysibm.sqlprimarykeys), but none of the tables that were querying slowly have columns listed there. I was told that's because they're in qsys.qadbkfld; a quick search on IBM's support page says these are unique keys.
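For poking around, the QSYS2 catalog views on IBM i can list both the indexes and the key constraints on a table. A sketch (library and table names are placeholders):

```sql
-- Indexes on one table, including whether each is unique.
SELECT index_name, is_unique
FROM qsys2.sysindexes
WHERE table_schema = 'MYLIB' AND table_name = 'MYTABLE';

-- Constraints on the same table: PRIMARY KEY, UNIQUE, CHECK, FOREIGN KEY.
SELECT constraint_name, constraint_type
FROM qsys2.syscst
WHERE table_schema = 'MYLIB' AND table_name = 'MYTABLE';
```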
Thanks!
Hi All,
I am trying to connect Tableau to IBM DB2. I have already downloaded the IBM DB2 Windows 64 bit driver package. According to this IBM article (https://www.ibm.com/docs/en/db2-warehouse?topic=database-connecting-db2), the next step is configuring the db2dsdriver.cfg for my database instance. However, this file does not exist within the driver package. There is a db2dsdriver.xsd file. Has anyone connected Tableau to IBM DB2? If so, how did you overcome this obstacle? Any help is greatly appreciated!
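For what it's worth, the driver package often ships only the schema (db2dsdriver.xsd), and the .cfg itself has to be created by hand. A minimal sketch of the file's structure (the alias, database name, host, and port are placeholders for your environment):

```xml
<!-- Minimal db2dsdriver.cfg sketch; replace alias, name, host, and
     port with your instance's values, then restart the client. -->
<configuration>
  <dsncollection>
    <dsn alias="MYDB" name="MYDB" host="dbserver.example.com" port="50000"/>
  </dsncollection>
  <databases>
    <database name="MYDB" host="dbserver.example.com" port="50000"/>
  </databases>
</configuration>
```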
I want my script to run overnight. It has to execute a query, save the data somehow, and change a few pieces of the code to get new data. Does anyone have experience with this?
Any suggestions to make the query or the program run faster??
Question:
Is it necessary to repeat the same condition in multiple AND clauses? Let me elaborate:
SELECT A.ID, B.ID, C.ID, D.ID FROM A, B, C, D (tables) WHERE A.ID BETWEEN 0 AND 10000 AND B.ID BETWEEN 0 AND 10000, and so on…
Hi, it's a bit of a long shot, but I'm looking for a copy of Net.Data V7.2 for Windows. It was a free add-on, but I cannot find a copy of it anywhere. If anyone has a copy, it would be greatly appreciated.
I'm a user of https://github.com/pacman82/arrow-odbc, but I'm trying to contribute to this and the related Python wrapper. The idea is bidirectional data transfers between ODBC and the Apache Arrow memory format for tabular data. Apache Arrow is becoming a very popular way for data science libraries to interoperate.
I'd like DB2 support, but the project would need to use GitHub actions to run a test suite against DB2. It already has a test suite for SQL Server. Here's the lay of the land for an open-source project trying to support DB2, without using a JVM language.
Seriously, my impression after this is that IBM is going out of their way to make sure open-source projects give up on trying to support DB2, unless they are using a JVM language. But maybe I'm missing some great resources. Any ideas on how to support a DB2 test suite for an open-source project using ODBC?
Edit:
Thanks for the gold!
Looking to implement a central key store for different applications, and I wanted help finding out which central key stores (for the encryption master key) are supported for DB2... thanks!!!
Hi All,
I want to ask if anyone has experience with IBM DB2 monitoring, especially memory processes. Currently I'm running an Elastic agent and can see high usage on db2sysc; is it possible to dive deeper into what is happening inside db2sysc?
I understand that db2sysc is the process that runs the DB2 database for the Spectrum Protect server.
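One way to break down what db2sysc is holding (a sketch, assuming Db2 LUW) is the MON_GET_MEMORY_SET table function, which reports usage per memory set inside the engine process:

```sql
-- Memory sets inside db2sysc, largest consumers first (values in KB).
SELECT memory_set_type,
       memory_set_used,
       memory_set_committed
FROM TABLE(MON_GET_MEMORY_SET(NULL, NULL, -2)) AS t
ORDER BY memory_set_used DESC;
```

The `db2pd -memsets` command gives a similar view from the command line without needing a database connection.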
Hello,
Is there a way to load EBCDIC data from a Mainframe database into a Db2 LUW database?
Also, can we convert the EBCDIC data format?
Earlier, we used to catalog the mainframe database and simply do a LOAD from cursor for the UTF-8 data format. However, we need advice on the EBCDIC data format.
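For reference, the load-from-cursor pattern against a cataloged remote database looks roughly like this (database alias, credentials, and table names are placeholders; in this path the client connection normally handles code-page conversion, so verify the converted data against the EBCDIC source ordering and content):

```sql
-- Sketch: pull rows from the cataloged mainframe database and LOAD
-- them into the local LUW table in one step.
DECLARE c1 CURSOR DATABASE zosdb USER myuser USING mypass
  FOR SELECT * FROM schema1.src_table;
LOAD FROM c1 OF CURSOR INSERT INTO luwschema.tgt_table;
```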
Thanks in advance for the suggestions.
Hi everybody !
We have an AS400 server running an IBM db2 database. Unfortunately we don’t have documentation for this database and the guy responsible for it is not around anymore.
I would like to recreate documentation as complete as possible, as we lack the knowledge to fully exploit this database.
My plan would be to perform some predetermined actions in the AS400 GUI and intercept the underlying queries in order to understand the database. Unfortunately, my understanding is that the native SQL tracing does not log READ requests, so it is of no use here.
This is why I come to you: would anyone know of a tool (native or not) that is able to intercept all SQL requests, or failing that, a packet-intercepting tool that can be set up on the AS400?
Thanks in advance !