/r/DB2
A forum for discussion around the International Business Machines (IBM) DB2 database platform. Questions and discussion around any of the Linux, Unix, Windows, or Mainframe platforms are welcome.
Is it possible?
Wondering how everyone is protecting their HADR setups. What backup product are you using? How do you do your load operations?
I'm trying to load a CSV file into my database, but this error keeps showing up when I click the Load button. The first picture shows what I see when I click Load on the button shown in the second picture.
I made a Java generator that generates classes for tables, plus a .NET DB2 Dapper CRUD API. It also creates an Angular AG Grid GUI.
I keep hitting a strange error in DB2 that I can't quite explain.
The high level is: I have a functioning query that returns accurate results with no issues. When I create a CTE to capture a separate data point and join that subset of data into the main query, I get a date-conversion error kicked back, stating that another data point, one that isn't involved with this CTE, has a date error.
Here's a high-level, non-specific example of what I'm seeing:
WITH TEST AS (
SELECT ROW_NUMBER() OVER(PARTITION BY ID_COL ORDER BY DATE_COL DESC) AS RN
,ID_COL
,DATE_COL
,INFO_COL
FROM DATABASE.TEST_DB
WHERE DATE_COL = 'Some date Here'
)
SELECT MDB.*
,TDB.INFO_COL
,TDB.DATE_COL
,CASE
WHEN ODB.DATE_COL IS NOT NULL THEN ODB.DATE_COL + 1 MONTH
ELSE NULL
END AS "TEST_COLUMN"
FROM DATABASE.MAIN_DB AS MDB
LEFT JOIN TEST AS TDB
ON MDB.ID_COL = TDB.ID_COL
LEFT JOIN DATABASE.OTHER_DB AS ODB
ON MDB.ID_COL = ODB.ID_COL
WHERE MDB.DATE_COL >= 'date here'
It will throw an error stating that a date conversion on a non-date value occurred. Previously, this example ran with no issues without the CTE; including the CTE throws the error whenever the TEST_COLUMN CASE expression is included.
I'm assuming somehow a nonstandard date got back into the database, which is causing this. However, I'm stumped: this data set is extremely controlled and shouldn't be able to get a non-date into any of these tables, and when I try to hunt for the bad value, I'm unable to find it.
Any ideas?
Worth noting: I can port this basically 1:1 over to SSMS and run it against a SQL Server duplicate database I'm maintaining right now as a sandbox, and it works with no issues.
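One hedged line of investigation (table and column names are taken from the example above, and this assumes the problem column is stored as a character type rather than a true DATE): when the CTE changes the access plan, Db2 may evaluate the ODB.DATE_COL + 1 MONTH expression against rows the old plan filtered out earlier, so a single malformed value can surface only under the new plan. If you're on 11.1 or later, a pattern check can hunt for values that won't cast:

```sql
-- Sketch: find character values in OTHER_DB.DATE_COL that don't look like
-- ISO dates (adjust the pattern to the format actually stored).
SELECT ID_COL, DATE_COL
FROM DATABASE.OTHER_DB
WHERE DATE_COL IS NOT NULL
  AND NOT REGEXP_LIKE(DATE_COL, '^[0-9]{4}-[0-9]{2}-[0-9]{2}$');
```

If DATE_COL really is a DATE column, the stored values can't be malformed, and the conversion error is more likely coming from an implicit cast somewhere else in the changed plan.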
Hi everyone,
I'm currently working through a Coursera Database Engineering course and I'm looking at a "Hands-on Lab" of IBM Db2 on Cloud. I'm running a query 'SELECT * FROM SYSIBM.SYSTABLES;' and the UI is only returning one result. There's a little prompt saying "Truncated Number of Records: 1", and when I run the mouse over it, it says:
"The result set is truncated and only the first 1 rows are shown. You can increase the maximum available size of result sets in the Options window to load more results, or choose to export the full results to a local file."
I have maxed out everything I can in the Options (next to the Run all button) and it does nothing. Where is this truncation option?
Hello all,
At the following link it states that the length limit for index size is "1022 or storage":
https://www.ibm.com/docs/en/db2/11.5?topic=sql-xml-limits
Maximum length of a variable index key part (in bytes): 1022 or storage
I am trying to find out how I can set a larger max value via "storage". I looked at the available settings in the CREATE TABLESPACE and CREATE STOGROUP commands, but I do not see anything that looks like it allows me to bump up this value.
I am using a large tablespace for this item. Does anyone know how to use "storage" to increase the length limit? Thank you!
Context...
Large organisation running Db2 LUW 11.5 with a 4.5 TB database on an AWS EC2 instance. HADR (standby and auxiliary), system online 24/7, CLI access only, no GUI.
We are trying to avoid the time, cost and technical implications of a blue/green deployment while migrating from a red hat 7 server to a red hat 8 server.
I had the thought of possibly stopping the database engine on server A, detaching the attached volume holding the working database, and reattaching it to server B.
Is this a possibility and can it be done quickly? I appreciate the Linux/AWS components are fairly straightforward but is it simple enough to point the engine to the new drive/database?
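For what it's worth, the mechanics of the move can be sketched roughly as below. All volume/instance IDs, device names, and paths are placeholders, and this assumes server B already has the same Db2 level installed and the same instance owner (matching UID/GID) so file ownership on the volume lines up:

```shell
# On server A: stop Db2 cleanly and release the volume
db2 deactivate db MYDB
db2stop
sudo umount /db2data

# Move the EBS volume (IDs are placeholders)
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0fedcba9876543210 --device /dev/sdf

# On server B: mount the volume and bring the database up
sudo mount /dev/xvdf /db2data
db2start
db2 catalog db MYDB on /db2data    # only if not already cataloged
db2 activate db MYDB
```

HADR configuration (hostnames, ports, the standby's view of the primary) would also need re-verifying after the move.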
I was wondering if it is possible, during an IMPORT, to set hardcoded values for some columns.
In my file I have columns A, B and C. In the target table I have columns A, B, C and D, but D is NOT NULL, so a simple IMPORT insert/replace will fail because nothing is supplied for column D.
Is there a way to import my file into my table by adding a value into the column D at the same time?
I know the table could have a default value on column D to avoid it, but my problem is that's currently not the case and I want to avoid the delay of waiting for the DBA to setup all this, so I am wondering if there is another way purely via coding.
Thanks.
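IMPORT itself has no way to supply a constant for a column that isn't in the file, but if the INGEST utility is available on your server (it ships with Db2 LUW), it lets you name the file's fields and then supply any expression, including a literal, in the VALUES clause. A sketch, with all names and the 'X' literal as placeholders:

```sql
INGEST FROM FILE /tmp/myfile.del
  FORMAT DELIMITED BY ','
  ( $a CHAR(20),
    $b CHAR(20),
    $c CHAR(20) )
  INSERT INTO MYSCHEMA.MYTABLE (A, B, C, D)
  VALUES ($a, $b, $c, 'X');
```

Another common workaround is importing into a staging table without column D, then doing an INSERT ... SELECT into the real table with the constant added.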
I am an Oracle DBA with some SQL Server knowledge too. At my workplace, they have DB2 databases running on Windows. They pay a contractor to manage these, but want my team to start picking up support. My company has offered to pay for training, but I'm struggling to find training providers who offer DB2 admin training. Even IBM doesn't seem to be running courses through its supplier. Where is the best place to start?
Good morning,
I'm having issues figuring out this SQL statement.
So this is a SQL statement we have running in RPGLE, and it is clearly setting a variable to the result of a procedure, but I can't find the location of that procedure to see what it's comparing against. It looks like a stored procedure, but when I go to Schemas, there is no ORDERLIB in Schemas. It's not a program either, because its name is too long and I don't see any aliasing. So I was hoping someone might know what this is, and maybe some steps to attempt to track down the answer.
Edit:
These are the only libraries that appear under schemas.
Edit again:
So I found the location of the procedure object, however, I don't know how to edit it. I can't seem to find a source file for it.
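On Db2 for i, the catalog can usually point at the object even when the GUI's Schemas list doesn't show it. A hedged example (the procedure name is a placeholder):

```sql
-- Where is the procedure defined, and is it external or SQL?
SELECT ROUTINE_SCHEMA, ROUTINE_NAME, EXTERNAL_NAME, ROUTINE_DEFINITION
FROM QSYS2.SYSROUTINES
WHERE ROUTINE_NAME = 'MYPROC';
```

For an SQL routine, ROUTINE_DEFINITION holds the source; for an external routine, EXTERNAL_NAME points at the backing program object.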
I need to create a very simple E(T)L process where I export data in DEL format from ServerA.DB_A.SCH.TAB, move it over to ServerB, then import it into ServerB.DB_B.SCH.TAB.
DB_A.SCH.TAB and DB_B.SCH.TAB are identical, DB_B side were created by the db2look output for DB_A side, column definitions etc. are the same.
Environment, dbm and database level configs like CODEPAGE (1208), CODE SET (UTF-8) and REGION are also identical. DB2 11.5 on Windows.
Still, there are some scenarios where the source data contains values in VARCHAR(50) columns that are rejected at IMPORT; after looking into it, it turns out the values are too long.
It looks like it's because of non-ASCII characters like á, é, ű etc.: the data doesn't fit in 50 bytes because the length is already almost at the limit, and when I change these characters manually to a, e... the IMPORT succeeds.
Since at some point the data somehow fit into the source table, there must be a way to load it into the destination with the same structure.
Any ideas on how to approach this any further?
As it currently stands the preferred format is still DEL, no option to use any ETL tool, the goal is to get this done with DB2 native tools, SQL, and PowerShell for automation later.
Cheers!
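One hedged guess worth checking: if the importing session's client code page isn't 1208, IMPORT converts the file from the client code page to UTF-8 on the way in, and á (one byte in, say, code page 1252) becomes two bytes, overflowing VARCHAR(50). Telling IMPORT the file is already UTF-8 avoids the conversion (file and table names are placeholders):

```sql
IMPORT FROM data.del OF DEL
  MODIFIED BY CODEPAGE=1208
  INSERT INTO SCH.TAB;
```

The same CODEPAGE modifier works on LOAD, and the client side can be checked with db2set DB2CODEPAGE.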
I'm trying to pivot data so that F2 from my source table becomes the key in my output table, and the data becomes the concatenation of the keys from my source table. Is this possible in DB2? See example.
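If "concatenation of the keys" means one delimited string per F2 value, this sounds like LISTAGG rather than a classic pivot. A sketch under that assumption (table and column names are guesses from the description):

```sql
SELECT F2,
       LISTAGG(F1, ',') WITHIN GROUP (ORDER BY F1) AS KEY_LIST
FROM SOURCE_TABLE
GROUP BY F2;
```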
Hi,
We have two different servers, and we have a procedure that works on one of the servers and not on the other.
The procedure:
input parameter: P_PARAM1
there is a select in the procedure where we use a condition like:
WHERE
((P_PARAM1 IS NULL AND NAME_COLUMN IS NULL) OR P_PARAM1 = NAME_COLUMN)
if I change this condition to:
((P_PARAM1 IS NULL AND NAME_COLUMN IS NULL) OR (P_PARAM1 = NAME_COLUMN AND P_PARAM1 IS NOT NULL))
then the condition matches correctly on both of the servers.
Do you have any idea which setting can cause this difference?
I am trying to migrate some Db2 for z/OS tables to Db2 for LUW and I would like to maintain the same ordering as in the source format (EBCDIC).
Would anyone know what collation I should define the LUW database with?
I installed IBM DB2 Express-C version 10.5.4 and have been unable to use any commands in the command line processor. The commands I've tried include 'connect to sample' and 'list database directory'. I get the following error message every time:
SQL1031N The database directory cannot be found on the indicated file system. SQLSTATE=58031
I was told that to check if the installation of DB2 was done correctly, I'd be able to test it using DBeaver, and so I did. As expected, I can't Test Connection using DBeaver either. I get the following error message:
SqlException
Some troubleshooting I've done myself includes:
None of the above seem to solve my issue at all. I'm currently using Windows 11. All of my classmates had no problem installing and running both DB2 and DBeaver.
One thing to note is I did a fresh install of Windows 11 when I got my laptop a year ago; none of my classmates seem to have done that, so I wonder if that has anything to do with it, but I haven't been able to find the exact cause or a solution. Any help would be much appreciated. Thank you!!
I am trying to determine how to know which one of these I should query when looking for specific information.
For example, I was told that to retrieve column names I use SYSCAT.COLUMNS, but if I want to retrieve specific information about column properties I use SYSIBM.COLUMNS.
The only explanation I can get is, roughly, that SYSCAT has base-layer information while SYSIBM contains lower-level system information, which doesn't really help, seeing as I don't know what constitutes "lower-level system information". I don't see how there is a difference between the name of a column and the length of a column in regards to the type of data. Are they not both metadata? Is there a different way I should be looking at this?
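For comparison, a typical day-to-day lookup goes against the SYSCAT view (schema and table names here are placeholders):

```sql
SELECT COLNAME, TYPENAME, LENGTH, SCALE, NULLS
FROM SYSCAT.COLUMNS
WHERE TABSCHEMA = 'MYSCHEMA'
  AND TABNAME = 'MYTABLE'
ORDER BY COLNO;
```

As a rule of thumb, the SYSCAT views are the documented, stable catalog interface; the SYSIBM objects underneath them are the base catalog tables, whose layout can change between releases.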
Hello,
I'm trying to capture information about a failure, and it has led me down a bit of a rabbit hole.
The issue is that I have a field that should be unique and an application/hardware component that doesn't want to play nicely with troubleshooting the issue.
I explored setting the column to unique, but since NULL is considered a value for uniqueness and NULL is a valid input, that wouldn't work. I can't use a pseudo-NULL value (such as matching another field's value), because there are concerns that the data would be misused or incorrectly applied. I also attempted a pseudo-NULL value with masking, but that wouldn't work either: it never returned a NULL value to the application (and it doesn't solve the problem of using the field for a "proper" value).
The next step would be to block nonunique values. So I developed a trigger:
CREATE TRIGGER duplicateISBNtrigger
BEFORE
UPDATE OR INSERT
ON
library.books
REFERENCING NEW AS N
FOR EACH ROW
BEGIN
DECLARE ISBN_count INT;
-- Does the ISBN value already exist?
SET ISBN_count = (SELECT COUNT(*) FROM library.books WHERE ISBN_NUMBER = N.ISBN_NUMBER);
IF ISBN_count > 0 THEN
-- Log the alert
-- Note: the column list and VALUES must match
INSERT INTO library.books_errors (alert, inputData)
VALUES ('ISBN Already Exists', N.ISBN_NUMBER);
-- Raise an exception to prevent the update
SIGNAL SQLSTATE '45123' SET MESSAGE_TEXT = 'This ISBN Already Exists.';
END IF;
END
This trigger actually works well enough: it rejects the duplicate value (we'll call it an ISBN) but permits null values. However, if you look carefully, you'll see that I'm also looking to log incidents.
This puts me in a chicken-and-egg scenario: if the SIGNAL line is in, the error is generated as expected; however, since it's technically an exception, it appears to roll back the transaction, which means the logging INSERT is discarded. If I drop the SIGNAL, the error is logged, but it's silent and no updates are made.
What I'd like to do is catch the scenario where the ISBN exists, log the details of the error, and generate the error without the logging insert being rolled back. I tried using EXECUTE IMMEDIATE...COMMIT and explored isolation levels, but I've not been successful.
An AFTER UPDATE/INSERT trigger could be coded to revert the data, but then the SIGNAL executes, undoing the effect of the insert trigger (which would then store the incorrect ISBN number in this example).
Is this a possibility, or is this something that cannot be done?
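If this is Db2 LUW 9.7 or later, one approach worth trying is moving the logging into an AUTONOMOUS procedure: it runs in its own transaction and commits independently, so its INSERT survives the rollback the SIGNAL causes. A sketch using the names from the example above (the VARCHAR(20) parameter type is a guess; match it to the real column):

```sql
CREATE OR REPLACE PROCEDURE library.log_dup_isbn (IN p_isbn VARCHAR(20))
  AUTONOMOUS
  LANGUAGE SQL
BEGIN
  -- Runs in its own unit of work; commits even if the caller rolls back
  INSERT INTO library.books_errors (alert, inputData)
  VALUES ('ISBN Already Exists', p_isbn);
END
```

The trigger would then CALL library.log_dup_isbn(N.ISBN_NUMBER) just before the SIGNAL.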
Can anyone tell me what the DATATYPEID column in the SYSCAT.SEQUENCES table means in DB2 version 11.5.4?
Hi all!
Does anyone know if it is possible to have HADR between 2 Db2 servers with different licenses (for example: Standard edition for the Primary and Community for the Standby)?
I know SQL Server has a query you can run to show slow queries; does DB2 have something similar? Are there any tutorials on performance monitoring? I'm running 11.5.
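There's no single canonical "slow query" view, but the package cache monitor function is the usual starting point on 9.7 and later. A sketch:

```sql
-- Top 10 statements by total CPU time from the package cache
SELECT SUBSTR(STMT_TEXT, 1, 100) AS STATEMENT,
       NUM_EXECUTIONS,
       STMT_EXEC_TIME,
       TOTAL_CPU_TIME
FROM TABLE(MON_GET_PKG_CACHE_STMT(NULL, NULL, NULL, -2))
ORDER BY TOTAL_CPU_TIME DESC
FETCH FIRST 10 ROWS ONLY;
```

The db2top tool and the MONREPORT module (e.g. CALL MONREPORT.DBSUMMARY(60)) are also worth a look.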
Hi all, I come from the Oracle/SQL Server world and am struggling with writing simple scripts that use loop variables, iterate, and apply DMLs on a large table with commit points.
I'm struggling to implement the same within db2. Most suggested methods are to wrap everything in stored procedures, but I don't want to write a sproc for each instance of my data updates.
Does anyone have any examples I can look at?
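For what it's worth, Db2 LUW does support anonymous compound statements from the CLP, so a sproc isn't strictly required. A hedged sketch of a batched delete with commit points (run with an alternate statement terminator, e.g. db2 -td@ -f script.sql; the table and predicate are placeholders):

```sql
BEGIN
  DECLARE v_rows INT DEFAULT 1;
  WHILE v_rows > 0 DO
    -- Delete at most 10,000 qualifying rows per pass
    DELETE FROM (SELECT 1 FROM myschema.big_table
                 WHERE status = 'OLD'
                 FETCH FIRST 10000 ROWS ONLY);
    GET DIAGNOSTICS v_rows = ROW_COUNT;
    COMMIT;
  END WHILE;
END@
```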
Hey guys,
Quick question, not so much on DB2, but what are the typical Laws and Regulations that a Data Engineer needs to consider when working with Data, creating data pipelines, and databases for a business?
Is there a rough estimate of how long that can take per GB of used space?
Is there any way to watch the progress?
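If this is about a running backup/restore/load (the post doesn't say which utility), the CLP can report progress:

```shell
db2 list utilities show detail
```

The output includes a progress-monitoring section showing completed versus total work units for each running utility.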
When I log in to the db2 server through PuTTY, I automatically have a process running: Bsh Bsh Bsh
How can I stop this from auto-running? TIA
I'm sorry, but what do TS (tablespaces) do? It looks like they just cluster tables. Database > TS > Tables?
I had a couple of friends point out another article on getting Db2 to run on Apple Silicon. I had some back and forth with the author, and it shows some of the process. Check out Kelly Rodger's blog - Db2 on Apple Silicon.
Hello, I'm new to Db2 and trying to learn how to write native stored procedures on Db2 for z/OS. So far no success. I'm trying a simple insert with default values and declared variables. Any examples or URLs I can refer to for learning? TIA
Been chomping at the bit to announce this for a while now as I was part of the beta program. Amazon announced the release of AWS Relational Database Service for Db2 about an hour ago. There will be a few sessions and talks on the subject at AWS Re:Invent 2023 this week. Check out the blog article I wrote about my experience with the product - Datageek.Blog: AWS Relational Database Service for Db2.