
E2 configuration to connect gNB to Near-RT RIC #964

Open
hududed opened this issue Dec 2, 2024 · 5 comments
hududed commented Dec 2, 2024

I am trying to get the O-RAN Near-RT RIC to connect to the gNB, following the xDevSM guide, all on a single host. I have succeeded in running the virtual ZMQ setup (Open5GS, srsRAN gNB, and srsUE) before, so I just need to set up the RIC. I didn't set up the RIC framework explicitly using O-RAN SC RIC or FlexRIC as shown in the srsRAN guide, but I have the RIC framework set up and am having trouble configuring the E2 interface of the srsRAN gNB. I wasn't sure whom to pose this issue to, but the xDevSM folks seem to direct it to the srsRAN folks (see response).

So I have Open5GS running, and the RIC framework running (I think):

❯ kubectl get po -n ricplt -owide
NAME                                                         READY   STATUS    RESTARTS       AGE    IP            NODE       NOMINATED NODE   READINESS GATES
deployment-ricplt-a1mediator-7777b44fbb-s5d9z                1/1     Running   0              3h   10.244.0.29   minikube   <none>           <none>
deployment-ricplt-alarmmanager-565bb46876-f565f              1/1     Running   0              3h   10.244.0.34   minikube   <none>           <none>
deployment-ricplt-appmgr-75bbdddf49-64pnj                    1/1     Running   0              3h   10.244.0.26   minikube   <none>           <none>
deployment-ricplt-e2mgr-5d6c5c9955-rj6zr                     1/1     Running   0              3h   10.244.0.28   minikube   <none>           <none>
deployment-ricplt-e2term-alpha-5f8cd66b4c-5jpnh              1/1     Running   0              3h   10.244.0.30   minikube   <none>           <none>
deployment-ricplt-o1mediator-5cc465ff78-f5c48                1/1     Running   0              3h   10.244.0.33   minikube   <none>           <none>
deployment-ricplt-rtmgr-584d6db494-tvnnm                     1/1     Running   3 (3h ago)   3h   10.244.0.27   minikube   <none>           <none>
deployment-ricplt-submgr-58d48c6d79-qk4ll                    1/1     Running   0              3h   10.244.0.31   minikube   <none>           <none>
deployment-ricplt-vespamgr-78d4db4bdb-mpvrm                  1/1     Running   0              3h   10.244.0.32   minikube   <none>           <none>
r4-infrastructure-kong-b6bdd88df-vg8mz                       2/2     Running   0              3h   10.244.0.23   minikube   <none>           <none>
r4-infrastructure-prometheus-alertmanager-5fcfd578f4-xpmp8   2/2     Running   0              3h   10.244.0.22   minikube   <none>           <none>
r4-infrastructure-prometheus-server-cf6cb746c-nt222          1/1     Running   0              3h   10.244.0.21   minikube   <none>           <none>
statefulset-ricplt-dbaas-server-0                            1/1     Running   0              3h   10.244.0.24   minikube   <none>           <none>

❯ kubectl get services -n ricplt
NAME                                        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
aux-entry                                   ClusterIP      10.109.41.88     <none>        80/TCP,443/TCP                  5h37m
r4-infrastructure-kong-manager              NodePort       10.110.244.185   <none>        8002:31319/TCP,8445:32222/TCP   5h37m
r4-infrastructure-kong-proxy                LoadBalancer   10.99.143.241    <pending>     80:32080/TCP,443:32443/TCP      5h37m
r4-infrastructure-kong-validation-webhook   ClusterIP      10.110.68.115    <none>        443/TCP                         5h37m
r4-infrastructure-prometheus-alertmanager   ClusterIP      10.98.52.220     <none>        80/TCP                          5h37m
r4-infrastructure-prometheus-server         ClusterIP      10.108.58.143    <none>        80/TCP                          5h37m
sctp-service                                NodePort       10.110.146.13    <none>        36422:31657/SCTP                5h31m
service-ricplt-a1mediator-http              ClusterIP      10.108.115.111   <none>        10000/TCP                       5h36m
service-ricplt-a1mediator-rmr               ClusterIP      10.111.218.204   <none>        4561/TCP,4562/TCP               5h36m
service-ricplt-alarmmanager-http            ClusterIP      10.96.74.42      <none>        8080/TCP                        5h36m
service-ricplt-alarmmanager-rmr             ClusterIP      10.111.76.70     <none>        4560/TCP,4561/TCP               5h36m
service-ricplt-appmgr-http                  ClusterIP      10.110.148.171   <none>        8080/TCP                        5h37m
service-ricplt-appmgr-rmr                   ClusterIP      10.98.160.116    <none>        4561/TCP,4560/TCP               5h37m
service-ricplt-dbaas-tcp                    ClusterIP      None             <none>        6379/TCP                        5h37m
service-ricplt-e2mgr-http                   ClusterIP      10.105.56.241    <none>        3800/TCP                        5h37m
service-ricplt-e2mgr-rmr                    ClusterIP      10.101.94.134    <none>        4561/TCP,3801/TCP               5h37m
service-ricplt-e2term-prometheus-alpha      ClusterIP      10.105.218.223   <none>        8088/TCP                        5h36m
service-ricplt-e2term-rmr-alpha             ClusterIP      10.106.235.100   <none>        4561/TCP,38000/TCP              5h36m
service-ricplt-o1mediator-http              ClusterIP      10.103.182.57    <none>        9001/TCP,8080/TCP,3000/TCP      5h36m
service-ricplt-o1mediator-tcp-netconf       NodePort       10.105.171.254   <none>        830:30830/TCP                   5h36m
service-ricplt-rtmgr-http                   ClusterIP      10.109.210.127   <none>        3800/TCP                        5h37m
service-ricplt-rtmgr-rmr                    ClusterIP      10.109.154.41    <none>        4561/TCP,4560/TCP               5h37m
service-ricplt-submgr-http                  ClusterIP      None             <none>        3800/TCP                        5h36m
service-ricplt-submgr-rmr                   ClusterIP      None             <none>        4560/TCP,4561/TCP               5h36m
service-ricplt-vespamgr-http                ClusterIP      10.108.181.139   <none>        8080/TCP,9095/TCP               5h36m
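
One hedged observation from the service listing above: e2term's SCTP endpoint is exposed outside the cluster via the `sctp-service` NodePort (36422 mapped to 31657/SCTP). From the host, the e2term pod IP `10.244.0.30` is typically not routable, so the usual target is the node IP plus the NodePort, e.g. (a minikube-specific sketch, not verified against this setup):

```shell
# Assumption: the minikube driver in use makes the node IP reachable from the host.
NODE_IP=$(minikube ip)
# Point the gNB's E2 agent at the NodePort fronting e2term's SCTP port 36422
sudo gnb -c gnb_zmq_e2.yaml e2 --addr="${NODE_IP}" --port=31657
```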

And I want to connect the gNB to the RIC, so I adapted the gNB E2 YAML file from the srsRAN guide for my setup (gnb_zmq_e2.yaml) and ran:

❯ sudo gnb -c gnb_zmq_e2.yaml e2 --addr="10.244.0.30" --bind-addr="127.0.0.5"
INI was not able to parse cu_cp.amf.++
Run with --help for more information.

Is the gNB config provided in the srsRAN guide correct? 127.0.0.5 is the IP of my AMF component, but which is the correct --addr to start the gNB? I have only tried the addresses associated with e2term-alpha.
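
For context, the e2 section of the srsRAN tutorial config looks roughly like the following. This is a sketch only: field names and the default E2 port (36421/36422) may differ between srsRAN releases, and the addresses are the ones from this issue, not verified values:

```yaml
e2:
  enable_du_e2: true        # enable the E2 agent (field name may vary by release)
  e2sm_kpm_enabled: true    # expose the KPM service model
  addr: 10.244.0.30         # E2Term endpoint (the e2term-alpha pod IP in this setup)
  port: 36422               # SCTP port E2Term listens on (see sctp-service above)
  bind_addr: 127.0.0.1      # a local address of the gNB host, not the AMF address
```

Note that --bind-addr is the gNB's own local address for the E2 connection; the AMF address belongs in the cu_cp/amf section, not here.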


pgawlowicz commented Dec 2, 2024

You need to pull the newest srsRAN version; the amf config section was recently moved under the cu_cp section.
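
For reference, a sketch of the change (exact field names may vary across srsRAN releases; the addresses are the ones mentioned in this issue):

```yaml
# Older releases: amf was a top-level section.
# amf:
#   addr: 127.0.0.5
#   bind_addr: 127.0.0.1

# Newer releases: amf is nested under cu_cp, which is why an old binary
# rejects the tutorial config with "INI was not able to parse cu_cp.amf.++".
cu_cp:
  amf:
    addr: 127.0.0.5       # AMF address from this issue
    bind_addr: 127.0.0.1  # local address the gNB binds for NGAP
```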

@pgawlowicz

@hududed any update?


hududed commented Dec 5, 2024

@pgawlowicz sorry I haven't tried it yet I'm in the middle of migrating to a new system. Makes sense and will get back on this next week.


hududed commented Dec 24, 2024

@pgawlowicz Sorry for the delay! I reinstalled the 24.10 version (hopefully that is the latest). The cu_cp error seems to be gone, but now:

❯ sudo gnb -c configs/gnb_zmq_xapp.yaml e2 --addr=10.101.199.166

--== srsRAN gNB (commit e5d5b44b9) ==--

INI was not able to parse pcap.e2ap_filename
Run with --help for more information.

gnb_zmq_xapp.txt

@alexandre-huff

Hi @hududed
The parameter was renamed to e2ap_du_filename.
Please replace "e2ap_filename" with "e2ap_du_filename" in the pcap section.
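
In other words, the pcap section should look something like this (a sketch; the filename path is an illustrative placeholder, and other pcap options in your config stay as they are):

```yaml
pcap:
  # renamed from "e2ap_filename" in newer srsRAN releases
  e2ap_du_filename: /tmp/gnb_e2ap_du.pcap
```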
