
Expose the IPVS firewall mark as a label #764

Closed
dsolsona wants to merge 3 commits

Conversation

dsolsona

When using IPVS with firewall marks, it is useful to know which virtual service a backend belongs to.

This PR exposes the firewall mark as a label. I've tested this in my local Prometheus setup and it works as expected. I haven't tested it in a non-fwmark setup, but the tests cover that use case.

@SuperQ @discordianfish

@SuperQ
Member

SuperQ commented Dec 17, 2017

It would be good to have a working example in the fixtures.

@discordianfish
Member

Agreed, in general this looks useful. Thanks! But yeah, some fixtures would be great!

@dsolsona
Author

I've added new fixtures to cover a use case with fwmark support.

Let me know if there's anything else required.

@SuperQ
Member

SuperQ commented Dec 17, 2017

After reading over the IPVS procfs code, and trying to understand the IPVS documentation a bit, I'm not sure this is what we need to do to handle this.

@dsolsona
Author

@SuperQ what do you mean exactly?

The changes to collector/fixtures/proc/net/ip_vs are based on what was added in https://github.com/prometheus/procfs/pull/48/files and also on a real example I got from our infrastructure:

ipvsadm output

~# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
FWM  1 wrr
  -> 10.91.70.51:80               Tunnel  100    1          1
  -> 10.91.70.52:80               Tunnel  100    2          1
FWM  100 wrr
  -> 10.90.153.167:443            Tunnel  100    4          5
  -> 10.91.89.34:443              Tunnel  100    4          4
FWM  110 wrr
  -> 10.90.153.172:443            Tunnel  100    0          0
  -> 10.90.153.173:443            Tunnel  100    0          0

proc output

~# cat /proc/net/ip_vs
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
FWM  00000001 wrr
  -> 0A5B4634:0050      Tunnel  100    3          2
  -> 0A5B4633:0050      Tunnel  100    3          2
FWM  00000064 wrr
  -> 0A5A99A7:01BB      Tunnel  100    3          6
  -> 0A5B5922:01BB      Tunnel  100    4          4
FWM  0000006E wrr
  -> 0A5A99AD:01BB      Tunnel  100    0          0
  -> 0A5A99AC:01BB      Tunnel  100    0          0
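For context on the format: an FWM service line carries only the hex-encoded mark and the scheduler, with no address or port, and that mark applies to every "->" backend line beneath it. A minimal standalone Go sketch of that parsing, for illustration only (this is not the actual procfs code):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/net/ip_vs")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	fwmark := ""         // empty for address-based (TCP/UDP) services
	seenService := false // used to skip the "-> RemoteAddress:Port ..." header line
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) < 2 {
			continue
		}
		switch fields[0] {
		case "FWM":
			// e.g. "FWM  0000006E wrr": hex-encoded mark, no address/port.
			fwmark = fields[1]
			seenService = true
		case "TCP", "UDP":
			fwmark = "" // address-based service, no mark
			seenService = true
		case "->":
			if !seenService {
				continue // column header, not a backend
			}
			// fields[1] is the hex-encoded backend "ADDR:PORT".
			fmt.Printf("backend=%s fwmark=%q\n", fields[1], fwmark)
		}
	}
	if err := scanner.Err(); err != nil {
		panic(err)
	}
}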

The node_exporter built from this branch

~# curl -s localhost:9100/metrics |grep -i ipvs
node_exporter_build_info{branch="ipvs_fwmark",goversion="go1.9.2",revision="fde756d93cc8ba23b9e88be9cc3076948e7313b0",version="0.15.0"} 1
# HELP node_ipvs_backend_connections_active The current active connections by local and remote address.
# TYPE node_ipvs_backend_connections_active gauge
node_ipvs_backend_connections_active{local_address="<nil>",local_mark="00000001",local_port="0",proto="FWM",remote_address="10.91.70.51",remote_port="80"} 2
node_ipvs_backend_connections_active{local_address="<nil>",local_mark="00000001",local_port="0",proto="FWM",remote_address="10.91.70.52",remote_port="80"} 3
node_ipvs_backend_connections_active{local_address="<nil>",local_mark="00000064",local_port="0",proto="FWM",remote_address="10.90.153.167",remote_port="443"} 3
node_ipvs_backend_connections_active{local_address="<nil>",local_mark="00000064",local_port="0",proto="FWM",remote_address="10.91.89.34",remote_port="443"} 4
node_ipvs_backend_connections_active{local_address="<nil>",local_mark="0000006E",local_port="0",proto="FWM",remote_address="10.90.153.172",remote_port="443"} 0
node_ipvs_backend_connections_active{local_address="<nil>",local_mark="0000006E",local_port="0",proto="FWM",remote_address="10.90.153.173",remote_port="443"} 0

Am I missing anything?

@SuperQ
Member

SuperQ commented Dec 17, 2017

What I'm thinking is that the local marks don't seem to have any association with a local address. Maybe it would be easier to simply map the mark label into the address label?

@dsolsona
Author

The way I understand it is that those are two different things. In TCP/UDP you have a local address which maps to an IP:PORT combo (or just a single IP). When defining a fwmark service, the concept of local address doesn't exist, you just configure an integer to identify which fwmark to use.

It might seem a bit weird to re-use the local_address field for something entirely different.
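For illustration, a fwmark service is defined by marking packets in netfilter and then pointing IPVS at the mark instead of an address (the addresses and mark value below are made up):

~# iptables -t mangle -A PREROUTING -d 10.0.0.1/32 -p tcp --dport 443 -j MARK --set-mark 100
~# ipvsadm -A -f 100 -s wrr
~# ipvsadm -a -f 100 -r 10.90.153.167:443 -i -w 100

The 100 here is the same integer that shows up as FWM 00000064 in /proc/net/ip_vs; there is never an IP:PORT on the service side.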

@SuperQ
Member

SuperQ commented Dec 17, 2017

Yes, it's tricky. One option is to make these a new metric with different labeling.

One thing we definitely need to fix is mapping the nil localAddress to an empty string.

@dsolsona
Author

I agree with fixing localAddress being nil; I could try to fix this in the procfs repo.

As for a new metric, I'm not entirely sure it would make sense, since it is in fact the same metric :). Perhaps renaming the label from local_mark to just fwmark would make more sense.

@discordianfish
Member

Since it's still tracked as an active connection, using the same metric is fine IMO. I also think it's okay to have local_address be empty for these cases; I'd say this isn't that uncommon.
Fixing this in procfs would be nice, but IMO there's no need to block this merge on it.

@dsolsona
Author

dsolsona commented Jan 8, 2018

Thanks @discordianfish!

I'm glad we can get this merged.

@discordianfish
Member

@SuperQ You're okay with merging this now?

@dsolsona
Author

dsolsona commented Feb 7, 2018

🙏 I'd like to have this merged

@discordianfish
Member

@SuperQ We should decide what to do here. I'm happy to leave the final say on this to you, but we should decide something :)

@SuperQ
Member

SuperQ commented Feb 12, 2018

Yes, sorry, I've been distracted.

This PR still needs to fix the local_address value nil problem.

node_ipvs_backend_connections_active{local_address="192.168.0.57",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.50.21",remote_port="3306"} 1498
node_ipvs_backend_connections_active{local_address="192.168.0.57",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.82.21",remote_port="3306"} 1499
node_ipvs_backend_connections_active{local_address="192.168.0.57",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.84.22",remote_port="3306"} 0
node_ipvs_backend_connections_active{local_address="<nil>",local_mark="10001000",local_port="0",proto="FWM",remote_address="192.168.49.32",remote_port="3306"} 321
Member
This needs to be local_address="", not <nil>

Contributor
I'm willing to address this issue, but I believe procfs returns a net.IP instance, and this package invokes its .String() method, which returns <nil> for a nil IP. If it's okay, I can do another PR that mitigates this issue in the current package? Thanks!
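If it helps, a minimal sketch of such a guard on this package's side (names are illustrative, not the actual collector code). net.IP.String() returns "<nil>" for a zero-length IP, so the label value needs to be mapped explicitly:

package main

import (
	"fmt"
	"net"
)

// addressLabel maps a nil/empty net.IP to "" instead of the "<nil>"
// string that net.IP.String() produces for zero-length addresses.
func addressLabel(ip net.IP) string {
	if len(ip) == 0 {
		return ""
	}
	return ip.String()
}

func main() {
	fmt.Printf("%q %q\n", addressLabel(nil), addressLabel(net.ParseIP("192.168.0.57")))
	// prints: "" "192.168.0.57"
}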

@discordianfish
Member

@dsolsona Can you fix this? I think then we can merge this.

@discordianfish
Member

@dsolsona Bump.
@SuperQ I assume you'd be okay with merging this once this is fixed, right?

@SuperQ
Member

SuperQ commented Apr 17, 2018

Yes, I think we just need to fix up the labeling issue, and it'll be fine.

@discordianfish
Member

I'm going to close this for now due to lack of activity. Feel free to re-open once you revive this.
