Support different number of fraction digits for RFC3339 time format #9283
Conversation
I'd like to hear opinions about the default behaviour.
IIRC the reason for the default behaviour was that artificially created time values like
That seems very strange. A typical system clock should usually have microsecond resolution.
Yes, I meant I have microseconds, but no nanoseconds.
Yeah, that's to be expected. Actual nanosecond resolution seems to be rare. But it doesn't matter in this regard, because even microsecond resolution makes
I know... what I'm saying is that it's plainly wrong to change the output format depending on the number of nanoseconds. We must choose a default. It could be nanoseconds, but if it's rare to have that level of precision, let's use microseconds. Or let's make it dependent on the platform.
I just went ahead and changed the default to microseconds. We could make it platform dependent, but for portability it might be better to always have the same value. Microseconds is probably enough for most applications anyway. If we want the previous behaviour, we could add a precision field that stores up to how many digits of nanoseconds are reliable. This same approach is used by Elixir, for example. Although I remember some headaches because a value with precision 0 is not equal to another one with higher precision but 0 microseconds.
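A fixed microseconds default can be illustrated with Ruby's `Time#iso8601`, which already takes a fraction-digits argument. This is just a sketch of the idea, not the Crystal API under discussion:

```ruby
require 'time'

with_fraction = Time.utc(2020, 5, 12, 10, 30, 15, 123456)
whole_second  = Time.utc(2020, 5, 12, 10, 30, 15)

# With a fixed six digits, both values print with the same width, so the
# format no longer depends on whether the sub-second part happens to be zero.
puts with_fraction.iso8601(6)  # => "2020-05-12T10:30:15.123456Z"
puts whole_second.iso8601(6)   # => "2020-05-12T10:30:15.000000Z"
```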
Microseconds precision is good! 👍
I don't think microseconds by default is a good choice. It's worse than the status quo.
Is the disconnect maybe whether you see this as a format for machines to consume or for humans to read?
Actually, in Ruby the default is to display no decimals. We could go that way too. I don't think the current status is better. If you see a value without decimals, you don't know whether they were just not printed (a formatting decision) or the value actually has zero nanoseconds. And the same happens with any number of decimals.
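For reference, this is the Ruby behaviour being described: `Time#iso8601` prints no decimals unless you explicitly pass a digit count.

```ruby
require 'time'

t = Time.utc(2020, 5, 12, 10, 30, 15, 123456)

puts t.iso8601     # => "2020-05-12T10:30:15Z"           (no decimals by default)
puts t.iso8601(9)  # => "2020-05-12T10:30:15.123456000Z" (explicit nine digits)
```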
I'm also thinking: if one day computers have more precision, should we go ahead and change the format to show more decimal digits? At what point will you be confident that no decimals were cut?
That's true, but it's obvious that this could be a deliberate choice. That's much less visible for microseconds. @asterite I'm pretty sure we can safely assume nanoseconds to be the maximum precision used by general purpose computer systems for a long time.
Added `fraction_digits` parameter to `Time#to_rfc3339` and `Time#to_rfc3339(IO)`.
I just pushed the change to make it work like in Ruby, with no decimals by default. For YAML, because it might be used as a serialization format, it's now always using 9 digits. In the future we could add a
This makes it possible to select 0, 3, 6 or 9 decimal digits in the RFC3339 time format.

I also added the optional `fraction_digits` parameter to `Time#to_rfc3339` and `Time#to_rfc3339(IO)`.

Something that still feels weird is that by default (`fraction_digits = nil`) the output will print all the decimals or no decimals at all, depending on the time value. I know it might be really unlikely that `nanoseconds == 0`, but I still think the output format should always be the same. Maybe the default could be something in between, like printing microseconds?
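The proposed behaviour can be sketched in Ruby, since its `Time#iso8601` already truncates to a given number of fraction digits. The `to_rfc3339` helper name and its restriction to 0, 3, 6 or 9 digits mirror this PR's description; the helper itself is illustrative, not the actual Crystal implementation:

```ruby
require 'time'

# Illustrative helper: format a Time as RFC 3339 with a fixed number of
# fraction digits, restricted to the field widths 0, 3, 6 or 9 as in the PR.
def to_rfc3339(time, fraction_digits = 0)
  unless [0, 3, 6, 9].include?(fraction_digits)
    raise ArgumentError, "fraction_digits must be 0, 3, 6 or 9"
  end
  time.utc.iso8601(fraction_digits)
end

t = Time.utc(2020, 5, 12, 10, 30, 15, 123456)

puts to_rfc3339(t)     # => "2020-05-12T10:30:15Z"        (default: no decimals)
puts to_rfc3339(t, 3)  # => "2020-05-12T10:30:15.123Z"    (milliseconds)
puts to_rfc3339(t, 6)  # => "2020-05-12T10:30:15.123456Z" (microseconds)
```

A fixed digit count makes the output width independent of the value, which addresses the concern above about the format changing when `nanoseconds == 0`.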