Problem

When counting the specificity of a selector, Parker counts the three numbers a, b and c correctly, as specified by Selectors Level 3: 9. Calculating a selector's specificity. It then concatenates the three numbers into a string, which it converts into an integer using the Number object. I feel this is a fundamentally wrong approach.

Example

The result is that, given the following CSS:
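(The original snippet was not preserved in this copy of the issue; below is a reconstruction that matches the reported counts, with the second selector being a guess with two classes and one type selector.)

```css
/* first selector: 22 type selectors -> a-b-c counts of 0-0-22 */
w v u t s r q p o n m l k j i h g f d c b a { color: red; }

/* second selector (hypothetical): two classes, one type -> 0-2-1 */
.foo .bar p { color: blue; }
```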
Parker would count a specificity of 22 for the first selector and 21 for the second, and then write out:
Top Selector Specificity: 22
Top Selector Specificity Selector: w v u t s r q p o n m l k j i h g f d c b a
But, clearly, the second selector has higher specificity!
The problem is that the Number object transforms the string into a base-10 number, whereas the above-mentioned specification states precisely that:
Concatenating the three numbers a-b-c (in a number system with a large base) gives the specificity.
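Here is a minimal sketch of the flawed pattern described above (not Parker's actual source):

```js
// Concatenate the three specificity counts and parse the result
// as a base-10 number -- the behaviour this issue describes.
function brokenSpecificity(a, b, c) {
  return Number('' + a + b + c);
}

brokenSpecificity(0, 0, 22); // 22 -- selector with 22 type selectors
brokenSpecificity(0, 2, 1);  // 21 -- two classes and one type selector
// 22 > 21, so the first selector wrongly ranks higher, even though
// 0-2-1 beats 0-0-22 under real CSS specificity rules.
```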
Proposed solutions
There are two solutions I am aware of:
Base-16-like
Use letters to represent digits larger than 9, forming the sequence 0-9a-zA-Z. That is, the first selector in the example would get specificity m. Base64 uses almost the same approach (with a different digit order).
This would only move the problem to base 62, which would be more usable and would be (IMHO) enough in the vast majority of cases.
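A sketch of this encoding with the 0-9a-zA-Z sequence, assuming each count stays below 62:

```js
const DIGITS = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';

// Encode one specificity count as a single base-62 digit.
function toDigit(n) {
  if (n >= DIGITS.length) throw new RangeError('count exceeds base 62');
  return DIGITS[n];
}

function base62Specificity(a, b, c) {
  return toDigit(a) + toDigit(b) + toDigit(c);
}

base62Specificity(0, 0, 22); // "00m"
base62Specificity(0, 2, 1);  // "021" -- correctly sorts after "00m"
```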
Three separated numbers
Show the result as three (dot?) separated numbers, making the first selector's specificity 0.0.22. This reintroduces the way CSS2 dealt with this. Such a solution is explicit and complete.

It would, however, raise the question of how to count specificity-per-selector and related metrics. Averaging each part separately would resolve this. It might even carry more information, as I have no idea what Specificity Per Selector: 9.48506151142355 truly represents.
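A sketch of both the dot-separated display and the per-part averages (function names are illustrative):

```js
// Render a specificity triple in the dot-separated, CSS2-style notation.
function formatSpecificity([a, b, c]) {
  return a + '.' + b + '.' + c;
}

// Average each component separately across all selectors.
function averageSpecificity(triples) {
  const sums = triples.reduce(
    ([a, b, c], [x, y, z]) => [a + x, b + y, c + z],
    [0, 0, 0]
  );
  return sums.map((s) => s / triples.length);
}

formatSpecificity([0, 0, 22]);               // "0.0.22"
averageSpecificity([[0, 0, 22], [0, 2, 1]]); // [0, 1, 11.5]
```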
So what if we could represent specificity like this: { a: 1, b: 0, c: 0 }? This would mean we have a selector with a single ID selector, like #my-selector. For display purposes we could use the proposed 0.0.22 notation, but for comparisons or averages I would use the object notation, because I think

{ a: 0.493, b: 2.509, c: 1.208 }

is easier to comprehend than 0.493,2.509,1.208. It is also clearer for further analysis, as sorting specificities would be done by sorting first on a, then on b and then on c.
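A sketch of that object representation, with the lexicographic comparison such sorting needs (names are illustrative):

```js
// Compare two specificity objects: a wins first, then b, then c.
function compareSpecificity(s1, s2) {
  return (s1.a - s2.a) || (s1.b - s2.b) || (s1.c - s2.c);
}

const specificities = [
  { a: 0, b: 0, c: 22 }, // 22 type selectors
  { a: 0, b: 2, c: 1 },  // two classes, one type selector
  { a: 1, b: 0, c: 0 },  // #my-selector
];

specificities.sort(compareSpecificity);
// -> 0.0.22, then 0.2.1, then 1.0.0 -- matching real CSS precedence
```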