udffsck 1.00-beta #7
base: master
Conversation
.travis.yml
```diff
   - cd cmocka-1.1.0
   - mkdir build
   - cd build
   - case "$CC" in
-      "tcc") cmake -DCMAKE_INSTALL_PREFIX=$PTH -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER=$(which gcc) ../ ;;
+      "tcc") cmake -DCMAKE_INSTALL_PREFIX=$PTH -DCMAKE_BUILD_TYPE=Release -DWITH_STATIC_LIB=ON ../ ;;
```
You forgot to enable the -DWITH_STATIC_LIB=ON flag for the other configurations that have the --static flag in LDFLAGS, so the build failed on ARM and PPC.
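The fix pali is pointing out above can be sketched like this: derive the cmake flags from LDFLAGS, so that -DWITH_STATIC_LIB=ON is passed for every configuration that links with --static, not just one. `select_cmake_flags` is a hypothetical helper, not part of the repository; the prefix and compiler flags come from the .travis.yml snippet above.

```shell
# Hypothetical helper: build the cmocka cmake flags from $CC and $LDFLAGS.
select_cmake_flags() {
  cc="$1"
  ldflags="$2"
  flags="-DCMAKE_INSTALL_PREFIX=$PTH -DCMAKE_BUILD_TYPE=Release"
  # cmocka only builds its static library with this flag on; a --static
  # link fails without it (as seen on ARM and PPC)
  case "$ldflags" in
    *--static*) flags="$flags -DWITH_STATIC_LIB=ON" ;;
  esac
  # tcc cannot build cmocka itself, so configure cmocka with gcc instead
  case "$cc" in
    tcc) flags="$flags -DCMAKE_C_COMPILER=$(which gcc)" ;;
  esac
  echo "$flags"
}
# usage: cmake $(select_cmake_flags "$CC" "$LDFLAGS") ../
```

This keeps the per-compiler special cases in one place instead of duplicating the cmake line in every branch of the case statement.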
Ok, ARM compiles and the unit tests are passing, but then it gets stuck on the first real data.
Ad ARM: maybe there is a similar infinite loop, like the one with getchar()?
Ad ARM: there should not be. I guess there is something with access to data from qemu, but I'll check it.
open/close/read/write/mmap should work fine in qemu. IIRC there were only problems with threads and some synchronization. Another reason can be slowness: Travis automatically kills jobs when there is no output on stdout/stderr for 10 minutes. Try to add some debugging output to stdout/stderr... For valgrind, just run
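One way to follow the "add some debugging output" advice is to wrap the long-running command with a background heartbeat, so Travis sees periodic output on stderr and does not kill the job after 10 silent minutes. `run_with_heartbeat` is a hypothetical helper, not an existing script in the repo.

```shell
# Hypothetical helper: run a command while a background loop emits a
# heartbeat line on stderr every $1 seconds, so CI's no-output timeout
# never triggers; the command's exit status is preserved.
run_with_heartbeat() {
  interval="$1"; shift
  (while :; do echo "heartbeat: still running" >&2; sleep "$interval"; done) &
  hb=$!
  "$@"
  status=$?
  kill "$hb" 2>/dev/null
  wait "$hb" 2>/dev/null
  return "$status"
}
# usage, e.g. in .travis.yml:
# run_with_heartbeat 60 ./udffsck some-image.img -vvv
```

This distinguishes a genuinely stalled test (the heartbeat keeps printing but the job never finishes) from a merely slow one killed by the output timeout.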
Yeah, it is killed because it stalled. I'll try it locally to see what happens. Memory access was debugged using clang's address sanitizer, but I'll definitely run Valgrind as well.
…M runner. It runs locally but stalls on CI
Very well, the ARM cmocka tests went crazy. It seems the forking mechanism is not working properly for some reason. The good thing is that it works (more or less), so I guess I can now consider CI up and running for udffsck.
If you have some ARM board (e.g. a Raspberry Pi), you can try to run the ARM test natively, to check whether the problem is in udffsck code or in qemu-arm. Or you can install a full ARM-based Linux distribution and run it under qemu-system-arm. And for x86 (both gcc and tcc), you should run all tests under valgrind. Just call
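A sketch of "run all tests under valgrind": the exact command pali suggested is cut off above, so the valgrind flags below are my assumption, and `run_tests_under` is a hypothetical helper.

```shell
# Hypothetical helper: run each test binary through a runner command
# (e.g. valgrind) and stop at the first failure.
run_tests_under() {
  runner="$1"; shift   # e.g. "valgrind --error-exitcode=1 --leak-check=full"
  for t in "$@"; do
    $runner "$t" || { echo "FAILED under runner: $t" >&2; return 1; }
  done
}
# usage (flags are an assumption, not the command from the thread):
# run_tests_under "valgrind --error-exitcode=1 --leak-check=full --track-origins=yes" ./test_*
```

`--error-exitcode=1` makes valgrind itself fail the CI step when it finds an error, instead of only printing a report.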
Ad ARM: I tried that, but I got stuck on some strange error, because Raspbian GCC is not able to build C99 sources, if I remember correctly. I guess I'll end up with qemu-system-arm. By running all tests, you mean running cmocka under valgrind, right?
Ad tcc: that thing produces hard-to-debug code, because it ignores -O flags (and has no equivalent of its own), and for some reason, even when built with -g, gdb cannot access locals, or even a specified function, because it is probably unrolled somewhere. From what I am seeing, it reorganizes code in a way that sometimes a variable is incremented one time more than it should be (seen in for loops, for example, comparing tcc against gcc/clang). And to be honest, I have no idea why, and since gdb is useless, I don't know how to debug it either. Any ideas on that matter?
FYI, the random increment is in file utils.c:209, variable 'line'. It needs to be built with -DHEXPRINT and run with: ./udffsck ../../udf-samples/bs2048-r0201-dirty-file-tree.img -vvv
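Since gdb is not usable on the tcc output, one alternative is differential tracing: run the same invocation from a gcc build and a tcc build and diff the -DHEXPRINT traces to see where the extra increment of 'line' first appears. `trace_diff` and the binary names in the usage comment are hypothetical.

```shell
# Hypothetical helper: run two commands, capture their combined output,
# and diff the traces; exits 0 when they match, non-zero when they diverge.
trace_diff() {
  a=$(mktemp); b=$(mktemp)
  eval "$1" > "$a" 2>&1
  eval "$2" > "$b" 2>&1
  diff "$a" "$b"
  rc=$?
  rm -f "$a" "$b"
  return "$rc"
}
# usage (binary names are assumptions):
# trace_diff './udffsck-gcc ../../udf-samples/bs2048-r0201-dirty-file-tree.img -vvv' \
#            './udffsck-tcc ../../udf-samples/bs2048-r0201-dirty-file-tree.img -vvv'
```

The first diverging trace line narrows the miscompiled region down far more precisely than stepping through optimizer-mangled code would.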
Raspbian GCC must be able to compile C99 sources... I see no reason why not. Otherwise tons of Debian packages would not be available on the Raspberry Pi, which is not the case. Do you see any error?
What is the status of merging? This seems interesting (I guess it is related to this work: https://dspace.vutbr.cz/xmlui/bitstream/handle/11012/65230/vladyka.pdf?sequence=-1).
Hi, yes, that diploma thesis is mine. The current status is the following: it seems to be working somehow on both x86 and x86-64 Linux using GCC or LLVM. TCC is work-in-progress, as is ARM support. Big-endian architectures are out of scope right now. The fsck itself can currently check and fix UDF 2.01 or older; newer versions are not supported yet. Remember that it is still an unreleased and rather untested fsck, so I would not recommend it for production-grade systems, but feel free to test it right now, because the planned changes are only for compatibility support or bugfixes at this moment.
Thanks for the quick update. I found your work while looking at how to leverage dm-integrity not only for checksum-mismatch discovery but also for repair, if the same block can be found on a different device, e.g. in a RAID mirror or parity setup. I have a CD image that read correctly with dd, but the files were all unreadable. Do you think such an image is a good test? Keep up the good work!
I see. Anyway, regarding the CD: if it has UDF on it, it can be an interesting test. My guess is that the filesystem will be beyond repair, but you can definitely try it.
Just chipping in my test results here. I was testing a large UDF partition for Windows interoperability, on which I placed my Fedora 28 home partition. I used the format-udf tool to create the most compatible setup. Unfortunately, the Linux UDF driver seems to cause some issues that lead Windows to refuse to write to the partition, leaving it read-only after a couple of days of use. I decided to give this tool a go, since Windows chkdsk wasn't much help and I had no other options. Alas, all it does is complain about LVID timestamps and then segfault. It also crashes valgrind, but at least the output of that is more useful; it's attached below. Running with any verbosity level above the default gives me endless warnings of "Error marking block as used. It is already marked." for seemingly every block in the filesystem. The partition in question no longer exists, as I've moved on to other options for multiplatform filesystems (and in fact am still looking for a usable solution). However, I'm happy to offer more help with testing, though my time may be limited in the future.
Would it be possible to release the checking part without the fixing part (if that's not ready yet), so you can at least see whether your partition is messed up? I'd also gladly help test it. I'm currently moving my games from a smaller disk to a bigger UDF-formatted one, and I'm already having problems with corrupted files and whatnot, so I'd make a good test animal. ;) Oh, and whenever I let it "Fix SBD", it breaks the partition with:
Just something I found yesterday (not as big as this pull request, but something).
Only two hearts for this? Come on. This is heavily underrated. And also disc authoring tools. I wish more people cared. Optical discs are the backbone of long-term archival. |
Thank you for the kind words. The fact is, my own life has moved on, and even though I am still listening to crickets here, I haven't been working on this for years either. The sad truth is that optical media are dead, at least in the mainstream. And as for long-term storage, I am not quite sure either, considering bit rot and other shenanigans.
I know this is late, but thank you for your efforts, argorain. Sadly, UDF development has been left somewhat behind in the Unix/Linux world. The same goes for disc authoring tools: growisofs hasn't been developed since 2008 (!), for example. It is full of open bugs, such as Unicode characters breaking a directory in UDF and file names over 86 characters breaking merged sessions (-M) in Rock Ridge.
UDF is also still interesting on HDDs/SSDs as a feature-rich format that ought to work across all three major OSes (Windows, Mac, Linux). New people coming from Windows are often on the lookout for a partition format that works across both OSes, but it is currently still very unreliable. So having something that can check for errors would be great.
It's still on store shelves. People even buy CD-R and CD-RW nowadays. But indeed, it seems not enough people are aware of the overwhelming long-term archival benefits that no other media has. Here is a good video explaining it.
Bit rot is a much greater issue on flash storage and hard disks than on optical discs. Good-quality optical discs can easily last 20 years. I have CD-Rs from my childhood that are fully readable today, but I have had corrupt files on a low-quality flash drive after mere months of storage. The drive itself still works fine, but the files were damaged by bit fading (memory transistors losing charge). These are so-called logical errors, not physical errors.

Good-quality flash storage can also last well over a decade with no damaged data, but it still has software-based points of failure like malware; think of ransomware. Write-once optical discs are immune to that once written. Other media have points of failure, like head crashes and memory-controller failures, that optical media is immune to thanks to its modular design: the disc and the drive are separate units, so if a drive stops working, the disc can simply be inserted into a new drive.

Most people don't seem to think long-term, but only care about the next shiny object. They think they can just dump everything on one metal disk and it will be around forever, or, even worse, on their smartphone's internal memory. Now that smartphone internal memories have grown tenfold compared to a decade ago, it seems all too convenient to just keep everything there and never store it anywhere else. But the day they lose access to their phones due to a borked update or backdoor (Louis Rossmann video), they realize how wrong that is. For long-term archival, there simply is no substitute for optical discs.
In fact, until 2019 (when the exFAT patents were graciously lifted), UDF was the only widely supported patent-free cross-platform file system without a 4 GiB file-size limit.
I just finished udffsck 1.00-beta. It is the first open-source implementation for Linux capable of both checking and fixing the UDF file system.
It can check and fix UDF up to version 2.01, just like the rest of the package.