dasdseq unable to get huge data set (DSNTYPE=LARGE or DSNTYPE=EXTENDED) #602
I cannot understand your problem report. Can you explain your issue a bit more clearly please? Do you have a repro? Can you provide detailed step-by-step instructions for how someone could easily reproduce the problem? Can you at least provide us with your Hercules and guest (MVS? z/OS?) logs that show the problem occurring? If you want our help, you need to help us! See: "SUBMITTING PROBLEM REPORTS". Steps 4 and 8 are the ones I'm mostly interested in right now, but all of the other information would be helpful as well. Remember: it is never harmful to provide more information than needed, but it is almost always harmful to not provide enough information.
Thank you very much for your answer! I hoped my description of the problem was pretty clear, but I see I was wrong; sorry for that. Here is a step-by-step instruction:
Either of the following jobs may be used for that:

```
//IEBDG    EXEC PGM=IEBDG
//SYSPRINT DD SYSOUT=*
//LARGE    DD DSNAME=LARGE.DATASET,UNIT=3390,DISP=(,KEEP),
//         DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000),
//         SPACE=(CYL,(10000,10000)),VOLUME=SER=111111,
//         DSNTYPE=LARGE
//SYSIN    DD *
  DSD OUTPUT=(LARGE)
  REPEAT QUANTITY=65000,CREATE=1
  CREATE QUANTITY=10000,FILL=X'FF'
  END
//
```

Alternatively, you could dump a large volume using:

```
//ADRDSSU  EXEC PGM=ADRDSSU,REGION=0M
//SYSPRINT DD SYSOUT=*
//Z24U01   DD DISP=OLD,VOL=SER=Z24U01,UNIT=3390
//LARGE    DD DSN=LARGE.DATASET,DISP=(,CATLG),
//         SPACE=(CYL,(10000,10000)),DSNTYPE=LARGE,
//         UNIT=3390,VOL=SER=111111
//SYSIN    DD *
  DUMP DATASET( -
    INCLUDE(**)) -
    INDD(SYSRES1) -
    OUTDD(LARGE) -
    SHARE TOL(ENQF) ALLEXCP
//
```

Before proceeding to the next step, please first verify that the data set that was created really is "LARGE" (i.e. occupies more than 65,535 tracks).
In my first post I tried to explain why this truncation happens. Thank you in advance, and have a Merry Christmas and a Happy New Year!
Thank you, Gregory! That helps a lot. But unfortunately not enough. In step 2, you say:
How do I do that? I know very little about z/OS! I can use z/OS a little bit, but I am extremely inexperienced with it! Should I do an ISPF function 3.4? (to get a list of datasets on that volume?) Then what? How do I display the size of the dataset? Is there a prefix command I can use? ("I" for "Information" or something??)

You have to understand, Gregory, that I am a Hercules developer, not a z/OS developer!

Also, on the second example JOB, I see that your ADRDSSU

Finally, can you tell me what your

Thanks!
Thank you for a quick answer!
Absolutely correct. Just use ISPF 3.4, then the "I" prefix command against the data set of interest. You'll see there both "allocated tracks" and "used tracks". "Used tracks" shows current space utilization and should be > 65535.
Sure. However, you should use an existing volume (presumably the system residence volume) which contains a lot of data.
I also tried

Thanks!
(Re: typo):
Thanks! I'll start looking into this bug right away.
Can you give me the name of a dataset on a z/OS system residence volume that I should use? (so I don't have to look for one?) Thanks.
My 2nd (alternative) job above specifies

I've checked it and discovered that I was wrong with my space calculations, and so it generated many records which I didn't require. The updated (corrected) version is below:
This job writes 65,000,000 records (65,000 x 1,000), which consumes 108,334 tracks, which is enough. On my home system (Intel i5 760, 2.80 GHz, Windows 7, Hyperion 4.6.0 SDL, z/OS 2.4) this job ran in 7 minutes. If you have non-SMS volumes available, you can specify VOL=SER on the LARGE DD. On my home z/OS there are no non-SMS volumes (all volumes are SMS-managed), so I've replaced

P.S. The size of this data set in bytes is 80 x 65,000 x 1,000 = 5,200,000,000.

P.P.S. After running
Gregory (or anyone else!): another Quick Question: How can you tell whether a dataset is normal, large or extended? I'm guessing there is a flag somewhere in the DSCB? (Remember, I know nothing about z/OS or MVS!)
Exactly, it can be recognized by flags in the format-1 DSCB:

```
61 (X'3D')  DS1FLAG1   Flag byte
            .... 1...  DS1LARGE  Large format data set.
78 (X'4E')  DS1SMSFG   System managed storage indicators.
            .... .1..  DS1STRP   Sequential extended-format data set.
```

See https://www.ibm.com/docs/en/zos/2.3.0?topic=dscbs-how-found for other details.

A "large" data set may exist on both SMS-managed and non-SMS-managed volumes, but an "extended" data set may exist on an SMS-managed volume only.
I suppose all we need to know to correctly process all kinds of sequential data sets are the following DSCB F1 fields:

```
 61 (X'3D')  DS1FLAG1 (1)  Flag byte.
             .... 1... (X'08') DS1LARGE  Large format data set.
 78 (X'4E')  DS1SMSFG (1)  System managed storage indicators.
             .... .1.. (X'04') DS1STRP   Sequential extended-format data set.
 82 (X'52')  DS1DSORG (2)  Data set organization.
             0100 000. (X'FE'=X'40') DS1DSGPS  Physical sequential (PS)
             organization. (I guess 'dasdseq' already checks this field.)
 98 (X'62')  DS1LSTAR (3)  Last used track and block on track (TTR).
101 (X'65')  DS1TRBAL (2)  Space remaining on last track. If NOT extended
             format, this is the value from TRKCALC indicating space
             remaining on the last track used. For extended format data
             sets this is the high order two bytes of the four-byte TTTT
             last used track number (see DS1LSTAR). Zero for VSAM, PDSE,
             and HFS.
104 (X'68')  DS1TTTHI (1)  High order byte of track number in DS1LSTAR.
             The first (high-order) byte of the TTT last block pointer
             for a "large" data set. Valid only if DS1LARGE is on.
```
THANK YOU, Gregory! Much appreciated. Please check me on this. I want to be sure I'm understanding things correctly. Are the following comments correct/accurate?

```
/* PROGRAMMING NOTE

   For NORMAL datasets, the last block pointer (TTR) is in the
   DS1LSTAR field. So the size of the dataset in number of tracks
   is in the first two bytes (TT) of DS1LSTAR. (The last right-
   most byte of DS1LSTAR being the 'R' part of the 'TTR'.)

   For DSNTYPE=LARGE however, the size of the dataset is 3 bytes
   in size (i.e. TTT, not just TT). So you use the first two
   high-order bytes of DS1LSTAR (just like you do for normal
   format datasets), but in addition to that, the high-order byte
   of the 3-byte TTT is kept in the DS1TTTHI field.

   For DSNTYPE=EXTENDED, the size of the dataset in tracks is of
   course 4 bytes in size (TTTT), with the low-order 2 bytes of
   that 4-byte TTTT coming from the high-order two bytes of the
   DS1LSTAR field (just as for normal/large format datasets),
   but the two HIGH-order bytes of the 4-byte TTTT are in DS1TRBAL.

   SUMMARY OF DATASET SIZE IN NUMBER OF TRACKS:

   Normal:    TT   = high-order 2 bytes of ds1lstar
   Large:     TTT  = ds1ttthi(1), high-order 2 bytes of ds1lstar
   Extended:  TTTT = ds1trbal(2), high-order 2 bytes of ds1lstar
*/
```

Yes?
Excellent!
HELP! I've made the necessary changes to

Attached are the output listings of my 3 attempts. They all failed:

In my 2nd attempt, I tried using

Finally, in my 3rd attempt, I changed my output

So I'm completely stuck and need help! (Remember, I know almost nothing about z/OS!!)
Hello. This is a zipped image of a 3390-27 disk with 2 data sets: LARGE1.DATASET and LARGE2.DATASET. The 1st data set is the result of the IEBDG job shown above: 5,200,000,000 bytes of X'FF'. The 2nd data set was created with a slightly different job:
All the records here are different: the first 15 positions contain a sequence number.

Secondly, about your failed jobs: the 1st and 2nd failures are quite clear. The 3rd was caused by missing space. I do not know what type/model the C5USR1 volume is. You need a 3390-27 to allocate 150,000 tracks, or at least an empty 3390-9 (a 3390-9 has 10,017 cylinders, i.e. 150,255 tracks).
It might be quite clear to YOU! But it isn't clear to me!
Where? I do not see any missing space.
Volume

```
CCKDMAP -i "Q:/CCKD64/zOS 2.5c (ADCD)/c5usr1.cckd64"
12:14:48.482 CCKDMAP -i c5usr1.cckd64 started; process-id = 8748 (0x0000222C)
12:14:48.636 HHC02499I Hercules utility CCKDMAP - Compressed dasd file map - version 4.7.0.11047-SDL-DEV-g8a6cfd87-modified
12:14:48.636 HHC01414I (C) Copyright 1999-2023 by Roger Bowler, Jan Jaeger, and others
12:14:48.637 HHC01417I ** The SDL 4.x Hyperion version of Hercules **
12:14:48.637 HHC01415I Build date: Jan 15 2024 at 12:27:40
12:14:48.637 HHC03020I
12:14:48.637 HHC03021I CCKDMAP of: "Q:/CCKD64/zOS 2.5c (ADCD)/c5usr1.cckd64"
12:14:48.640 HHC03007I File size: (2,424,016 bytes)
12:14:48.647 HHC03022I
12:14:48.647 HHC03022I dh_devid:     CKD_C064 (64-bit CCKD64 base image)
12:14:48.647 HHC03022I dh_heads:     15
12:14:48.647 HHC03022I dh_trksize:   56832
12:14:48.648 HHC03022I dh_devtyp:    0x90 (3390-9)
12:14:48.648 HHC03022I dh_fileseq:   0x00
12:14:48.648 HHC03022I dh_highcyl:   0
12:14:48.648 HHC03022I dh_serial:    278896508142
12:14:48.648 HHC03023I
12:14:48.648 HHC03023I cdh_vrm:      0.3.1
12:14:48.648 HHC03023I cdh_opts:     0x40
12:14:48.648 HHC03023I num_L1tab:    587
12:14:48.648 HHC03023I num_L2tab:    256
12:14:48.649 HHC03023I cdh_cyls:     10017 (150,255 tracks)
12:14:48.649 HHC03023I cdh_size:     0x000024FCD0 (2,424,016 bytes)
12:14:48.649 HHC03023I cdh_used:     0x000024FCD0 (2,424,016 bytes)
12:14:48.649 HHC03023I free_off:     0x0000000000 (old format)
12:14:48.649 HHC03023I free_total:   0x0000000000 (0 bytes)
12:14:48.650 HHC03023I free_largest: 0x0000000000 (0 bytes)
12:14:48.650 HHC03023I free_num:     0
12:14:48.650 HHC03023I free_imbed:   0
12:14:48.650 HHC03023I cdh_nullfmt:  0 (ha r0 EOF)
12:14:48.650 HHC03023I cmp_algo:     1 (zlib)
12:14:48.650 HHC03023I cmp_parm:     -1 (default)
12:14:48.722 HHC03020I
12:14:48.723 HHC03043I Total active tracks = 75 tracks
12:14:48.723 HHC03044I Avg. L2-to-track seek = 0.011 MB
12:14:48.723 HHC03020I
12:14:48.733 CCKDMAP -i c5usr1.cckd64 ended; rc=0
```

As you can see, the volume is a 3390-9 with 150,255 total tracks, of which only 75 tracks are in use. Thus it has over 150,000 tracks not in use. I would think that would have been plenty.

BUT NONE OF THAT IS IMPORTANT NOW, since you have kindly provided me with a copy of the exact same volume you were trying to unload with

I will try my

Please Stand By......
Hi. Are you sure that nothing else is allocated on the

Anyway, 150,000 tracks is very close to the limit, so you can try reducing the space to

But this is not correct, because

Thank you very much for your time spent!
DONE! It appears to be working perfectly! The proof is below:

"aaa.txt" is the output of the old/current broken version of dasdseq. As you can see, in the old version it shows "startrack(42797)" and "track(A72D)" on the "DS1LSTAR" messages, and "25678400" for the "Records written" message. (I added the "verbose 1 1" option to the dasdseq command.)

With the new fixed version, it shows "startrack(108333)" and "track(1A72D)" on the "DS1LSTAR" messages, and "65000000" for the "Records written" message. Additionally, you can also see in the new version a new

To be extra thorough, I also visually inspected the "large2.dataset" output file with a hex editor, and verified that the last record in the "large2.dataset" output file had "65000000" as its sequence number.

So everything looks good to me, and I've committed the fix. For reference, the commit hash is f2bbab8. Feel free to verify the fix for yourself if you wish.

THANK YOU for your help, Gregory! I could not have done it without your kind help!

Closing issue.
Closes GitHub Issue #602. THANK YOU to Gregory ("GregoryTwin") for the help!
I don't know. But it's all moot. I was able to complete my testing using your "111111.cckd" dasd image.
Unnecessary. I was able to complete my testing using your "111111.cckd" dasd image.
In the hands of someone like you who knows what they're doing? I'm sure it is! In my hands though? Not so much. But again, it doesn't matter. The point is moot. I was able to complete my testing using the "111111.cckd" dasd image that you kindly uploaded. Thank you for that! Much appreciated.
Great!
`dasdseq` determines the end of a dataset based on `DS1LSTAR`. However, for `DSNTYPE=LARGE`, `DS1LSTAR` contains just the 2 low order bytes of the last block pointer. (The high order byte is kept in `DS1TTTHI`, offset 104 = X'68'.) And for `DSNTYPE=EXTENDED`, the first 2 bytes of the last block pointer are kept in `DS1TRBAL`. `dasdseq` takes neither of these into consideration, and as a result, an attempt to offload such data sets with `dasdseq` completes with RC=0, but results in an output file that is incomplete (truncated), i.e. does not contain all of the data set's data.