# Metrics Toolkit
The Metrics Toolkit (formerly the metrics accelerator/framework) is a Mule application that collects, aggregates, and loads platform metrics into different visualization systems, providing out-of-the-box integrations and visualization options, including useful dashboards and charts. In addition to platform metrics, the toolkit also integrates with external applications such as Jira, Confluence, Jenkins, Bitbucket, and Splunk to gather SDLC metrics. This software is UNLICENSED; please review the considerations. If you need assistance extending this application, contact your MuleSoft Customer Success representative or MuleSoft Professional Services.
- Compact Mule application (a single application)
- Provides more than 100 metrics from 3 complementary domains:
- Platform Operational Metrics: collected and calculated automatically based on multiple products from Anypoint Platform: Exchange, Design Center, Runtime Manager, Access Management; covering metrics from applications deployed on-prem (Standalone), RTF and CloudHub.
- Platform Benefits: requires manual input to calculate final metrics, cross-referencing information from the "Platform Operational" domain
- External SDLC Metrics: collected and calculated automatically from multiple external applications: Jira, Confluence, Jenkins, Bitbucket and Splunk
- Poller (Push mode):
  - Collects, transforms, and loads metrics into a defined visualization system
  - Configurable frequency (cron expression and timezone) and status (enabled/disabled)
- API to manage the asset:
- API endpoint to obtain metrics on-demand (Pull mode)
- API endpoint for triggering a specific loader to push data to a visualization system
- API endpoint for loading platform benefits - manual input required
- Supported loaders / visualization options:
- CSV
- JSON
- Plain Log: in case you forward logs to external systems (e.g. using a Splunk forwarder)
- Splunk: includes an out-of-the-box dashboard with more than 100 charts
- ELK: includes basic out-of-the-box Kibana dashboards
- Anypoint Monitoring: requires a Titanium subscription; a dashboard is not provided
- Embedded dashboard: includes a basic out-of-the-box dashboard, served by the running application, offering a UI with a number of the collected metrics
- Tableau: includes an out-of-the-box dashboard with current consolidated platform metrics
- MongoDB
- SFDC: the Salesforce Loader will load data into Tableau CRM (formerly known as Einstein Analytics) via the Salesforce Analytics connector. Dashboards are not provided, and it is assumed users are familiar with Tableau CRM, data recipes, and Analytics Studio.
NOTES:
- Some adjustments to the Metrics Toolkit implementation may be required if a loader does not work as expected for your specific scenario.
- By default, the CSV loader output uses a comma as the separator character. To change the CSV output format, modify the output options in `loader-csv-build-structure.dwl` and/or `loader-csv-build-benefits-structure.dwl`, located under the `src/main/resources/dw/loader` directory. For more information about CSV output formatting options, check MuleSoft's official documentation.
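As a sketch, switching the separator to a semicolon would mean adjusting the output directive in those `.dwl` files along these lines (the `payload` body is a placeholder here, not the files' actual transformation logic):

```dataweave
%dw 2.0
// Hypothetical simplification of the loader script's header:
// only the output directive and its writer properties matter.
output application/csv separator=";", header=true
---
payload
```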
Product | Metric | Dimensions | |
---|---|---|---|
Access Management | Users Total | BG | |
Access Management | Active Users | BG | |
Access Management | Inactive Users | BG | |
Access Management | Active Users Last 60 days | BG | |
Access Management | Active Users Last 30 days | BG | |
Access Management | Environments Total | BG | |
Access Management | Environments Production Total | BG | |
Access Management | Environments Sandbox Total | BG | |
Access Management | API Client Applications Total | BG | |
Design Center | Assets Total | BG | |
Design Center | API Specs Total | BG | |
Design Center | Fragments Total | BG | |
Design Center | Flow Designer Apps Total | BG | |
Exchange | Assets Total | BG | |
Exchange | API Specs Total | BG | |
Exchange | Mule 3 Connectors Total | BG | |
Exchange | SOAP APIs Total | BG | |
Exchange | Fragments Total | BG | |
Exchange | HTTP Proxies Total | BG | |
Exchange | Policies Total | BG | |
Exchange | Extensions Total | BG | |
Exchange | Custom Assets Total | BG | |
Exchange | Overall Satisfaction | BG | |
Exchange | API Fragments reuse | BG | |
Exchange | API Spec reuse | BG | |
Exchange | Extensions reuse | BG | |
Exchange | API Specs managed in API Manager reuse | BG, Environment | |
Exchange | Policies applied in API Manager reuse | BG, Environment | |
API Manager | API Specs Managed Total | BG, Environment | |
API Manager | API Instances Total | BG, Environment | |
API Manager | API Instances Active Total | BG, Environment | |
API Manager | API Instances Inactive Total | BG, Environment | |
API Manager | API Instances Versions Total | BG, Environment | |
API Manager | API Instances With Policies Total | BG, Environment | |
API Manager | API Instances Without Policies Total | BG, Environment | |
API Manager | API Instances With Security Policies Total | BG, Environment | |
API Manager | API Instances Without Security Policies Total | BG, Environment | |
API Manager | API Instances With Contracts Total | BG, Environment | |
API Manager | API Instances Without Contracts Total | BG, Environment | |
API Manager | API Instances With More Than One Consumer Total | BG, Environment | |
API Manager | API Instances With One or More Consumers Total | BG, Environment | |
API Manager | API Contracts Total | BG, Environment | |
API Manager | Policies Used | BG, Environment | |
API Manager | Policies Used Total | BG, Environment | |
API Manager | Automated Policies Used | BG, Environment | |
API Manager | Automated Policies Used Total | BG, Environment | |
API Analytics | Transactions Last 30 days Total | BG, Environment | |
RuntimeManager - CloudHub - Networking | VPCs Total | BG | |
RuntimeManager - CloudHub - Networking | VPCs Available Total | BG | |
RuntimeManager - CloudHub - Networking | VPCs Used Total | BG | |
RuntimeManager - CloudHub - Networking | VPNs Total | BG | |
RuntimeManager - CloudHub - Networking | VPNs Available Total | BG | |
RuntimeManager - CloudHub - Networking | VPNs Used Total | BG | |
RuntimeManager - CloudHub - Networking | DLBs Total | BG | |
RuntimeManager - CloudHub - Networking | DLBs Available Total | BG | |
RuntimeManager - CloudHub - Networking | DLBs Used Total | BG | |
RuntimeManager - CloudHub - Networking | Static IPs Total | BG | |
RuntimeManager - CloudHub - Networking | Static IPs Available Total | BG | |
RuntimeManager - CloudHub - Networking | Static IPs Used Total | BG | |
RuntimeManager - CloudHub - Applications | vCores Total | BG, Environment | |
RuntimeManager - CloudHub - Applications | vCores Available Total | BG, Environment | |
RuntimeManager - CloudHub - Applications | vCores Used Total | BG, Environment | |
RuntimeManager - CloudHub - Applications | Applications Total | BG, Environment | |
RuntimeManager - CloudHub - Applications | Applications Started Total | BG, Environment | |
RuntimeManager - CloudHub - Applications | Applications Stopped Total | BG, Environment | |
RuntimeManager - CloudHub - Applications | Runtime Versions Used | BG, Environment | |
RuntimeManager - CloudHub - Applications | Runtime Versions Used Total | BG, Environment | |
RuntimeManager - RTF - Capacity | Fabrics Total | BG | |
RuntimeManager - RTF - Capacity | Workers Total | BG | |
RuntimeManager - RTF - Capacity | Controllers Total | BG | |
RuntimeManager - RTF - Capacity | Cores Allocated Total | BG | |
RuntimeManager - RTF - Capacity | Memory Allocated Total | BG | |
RuntimeManager - RTF - Capacity | Cores Allocated Per Fabric Average | BG | |
RuntimeManager - RTF - Capacity | Memory Allocated Per Fabric Average | BG | |
RuntimeManager - RTF - Applications | Cores Allocated Total | BG, Environment | |
RuntimeManager - RTF - Applications | Memory Allocated Total | BG, Environment | |
RuntimeManager - RTF - Applications | Applications Total | BG, Environment | |
RuntimeManager - RTF - Applications | Applications Started Total | BG, Environment | |
RuntimeManager - RTF - Applications | Applications Stopped Total | BG, Environment | |
RuntimeManager - RTF - Applications | Runtime Versions Used | BG, Environment | |
RuntimeManager - RTF - Applications | Runtime Versions Used Total | BG, Environment | |
RuntimeManager - Standalone | Mule Servers Total | BG, Environment | |
RuntimeManager - Standalone | Mule Clusters Total | BG, Environment | |
RuntimeManager - Standalone | Mule Server Groups Total | BG, Environment | |
RuntimeManager - Standalone | Mule Applications Total | BG, Environment | |
RuntimeManager - Standalone | Mule Applications Started Total | BG, Environment | |
RuntimeManager - Standalone | Mule Applications Stopped Total | BG, Environment | |
RuntimeManager - Standalone | Mule Runtime Versions | BG, Environment | |
RuntimeManager - Standalone | Mule Runtime Versions Total | BG, Environment | |
MQ | Queues Total | BG, Environment, Region | |
MQ | FIFO Queues Total | BG, Environment, Region | |
MQ | Queues In Flight Messages Total | BG, Environment, Region | |
MQ | Queues Received Messages Total | BG, Environment, Region | |
MQ | Queues Sent Messages Total | BG, Environment, Region | |
MQ | Queues ACK Messages Total | BG, Environment, Region | |
MQ | Exchanges Total | BG, Environment, Region | |
MQ | Exchanges Published Messages Total | BG, Environment, Region | |
MQ | Exchanges Delivered Messages Total | BG, Environment, Region | |
OSv2 | ObjectStore V2 Request Count | BG, Environment | |
These metrics require manual input:
Name | Dimensions |
---|---|
Developer Productivity | BG |
Platform Benefits | BG |
Savings From API Reuse | BG |
Savings From Maintenance Productivity | BG |
Savings From Reuse in Maintenance | BG |
Total Savings | BG |
These metrics are optional and can be cherry-picked per your requirements:
Name | Metric |
---|---|
BitBucket | Total Number of BitBucket Repositories |
Confluence | Total Number of Confluence pages |
Confluence | Total Number of Confluence pages created in the last 30 days |
Confluence | Total Number of Confluence pages updated in the last 30 days |
Confluence | Top Contributors in the last 30 days and associated number of pages created |
Jenkins | Total Number of Jenkins jobs |
Jenkins | Total Number of failed Jenkins jobs |
Jenkins | Total Number of successful Jenkins jobs |
Jenkins | Total Number of unexecuted Jenkins jobs |
Jira | Total Number of Jira stories in the backlog |
Jira | Total Number of Jira stories in the current sprint |
Jira | Jira stories categorized by type and associated count in the current sprint |
Jira | Jira stories categorized by status and associated count in the current sprint |
Splunk | Total Number of Splunk dashboards |
- Mule Runtime 4.2.2 or above
- All deployment models are supported: CloudHub, on-prem hosted runtimes, Runtime Fabric
- Anypoint Platform credentials - Two options are supported:
- Anypoint Platform user with the Organization Administrator role and the CloudHub Admin role (or specific permissions - see the Connected App section). Both roles should be provided in the Master org and all Sub Orgs for which you wish to gather data. The CloudHub Admin role is environment-specific and should therefore be granted for each environment in each business group.
- A Connected App (client credentials) with the following scopes (make sure to include all Sub Orgs and all environments for which you want to collect data):
- Design Center
- Design Center Developer
- Exchange
- Exchange Viewer
- Runtime Manager
- Cloudhub Network Viewer
- Read Alerts
- Read Applications
- Read Servers
- Runtime Fabric
- Manage Runtime Fabrics
- API Manager
- View APIs Configuration
- View Contracts
- View Policies
- General
- Profile
- View Environment
- View Organization
- (Optional for SDLC metrics) Authorized user with API access to any of the applications: Jira, Confluence, Jenkins, Bitbucket and Splunk for which you want to gather data.
- Clone or download the project from GitHub: `git clone git@github.com:mulesoft-catalyst/metrics-toolkit.git`
- Adjust the properties, run the project, and test it: open http://localhost:8081/console/ in your browser
- Use the Postman collection provided (`/postman`) to understand the API. The Postman collection contains the following requests:
- Platform Metrics:
- GET Platform Metrics: retrieves platform metrics
- POST Platform Metrics - Load - Splunk Strategy: used to load platform metrics to Splunk. For more information, see Splunk steps
- POST Platform Metrics - Load - Tableau Strategy: used to load platform metrics to Tableau. For more information, see Tableau steps
- POST Platform Metrics - Load - CSV Strategy: returns platform metrics in CSV format.
- POST Platform Metrics - Load - JSON Strategy: returns platform metrics in JSON format.
- Business Metrics:
- GET Benefits: retrieves business metrics showing the benefits of using the platform
- POST Benefits - Load - Splunk Strategy: used to load business metrics to Splunk. For more information, see Splunk steps
- POST Benefits - Load - JSON Strategy: returns business metrics in JSON format.
- To run the application in poller mode, you have to configure some properties
- Default configurations are defined in `/src/main/resources/properties/app-{env}.yaml`
- Make sure to encrypt all sensitive data using the Secure Properties Module: https://docs.mulesoft.com/mule-runtime/4.2/secure-configuration-properties
- The default secure config file is defined in `/src/main/resources/properties/secure/app-{env}.yaml`
- An example `mule.key` is used and configured as a Global Property in the `global.xml` file
Name | Description | Default Value |
---|---|---|
http.port | The port for exposing the metrics-toolkit API | 8081 |
poller.enabled | Property to enable or disable the poller to collect and load metrics in external systems | false |
poller.frequency.cron | Defines the exact frequency (using cron-expressions) to trigger the execution: Recommended to collect metrics once a day | 0 0 0 * * ? * |
poller.frequency.timezone | Defines the time zone in which the cron expression will be effective | GMT-3 |
aggregation.raw | Flag to define the format of the final response. false: won't provide raw data, only final metrics; true: will provide raw data to be aggregated outside this asset | false |
collectors | Comma-separated set of collectors that should be executed. Possible values available for all deployment models: core (Core Services), ap (Automated Policies), apc (API Clients), apm (API Manager), arm (Standalone Runtimes), dc (Design Center), ex (Exchange). The following collectors are not available for PCE: amq (Anypoint MQ), apma (API Manager Analytics), ch (CloudHub), rtf (Runtime Fabric), osv2 (Object Store V2) | all |
loader.strategy | In the case of using the poller, this property defines the strategy for loading data in external systems, the options are: csv, json, logger, splunk, am, elk, tableau, mongodb | logger |
anypoint.platform.host | Anypoint Platform Host. Change to eu1.anypoint.mulesoft.com if using the EU Control Plane or to a private host if using PCE | anypoint.mulesoft.com |
auth.mode | Authentication mode. Valid options are: platform-credentials or connected-app-credentials | platform-credentials |
auth.username | Anypoint Platform username. Used when auth.mode is platform-credentials | |
auth.password | Anypoint Platform password. Used when auth.mode is platform-credentials | |
auth.clientId | Anypoint Platform Connected App Client Id. Used when auth.mode is connected-app-credentials | |
auth.clientSecret | Anypoint Platform Connected App Client Secret. Used when auth.mode is connected-app-credentials | |
auth.orgId | Anypoint Platform master org Id | |
ignoreLists.organizations | An array (comma-separated values) of Anypoint Platform sub-organization IDs that will be ignored while retrieving metrics, e.g. "cdfa4e7d-47cd-n1h1-8f39-6a73fbb9ffcb, cdfa4e7d-47cd-n2h2-8f39-6a73fbb9ffcb" | |
ignoreLists.environments | An array (comma-separated values) of Anypoint Platform environment IDs that will be ignored while retrieving metrics, e.g. "cdfa4e7d-47cd-n1h1-8f39-6a73fbb9ffcb, cdfa4e7d-47cd-n2h2-8f39-6a73fbb9ffcb" | |
api.securityPolicies | A list of security policies IDs that are applied within the organisation. Should be updated to ensure accuracy of the APIs with/without security metrics | client-id-enforcement,ip-allowlist,ip-blocklist,jwt-validation,ldap-authentication,openidconnect-access-token-enforcement,external-oauth2-access-token-enforcement,http-basic-authentication |
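As an illustration, a minimal poller configuration in `app-{env}.yaml` could look like the sketch below. The property names and defaults come from the table above, but the YAML nesting is an assumption; mirror the structure of the bundled file:

```yaml
http:
  port: "8081"
poller:
  enabled: "true"
  frequency:
    cron: "0 0 0 * * ? *"   # once a day at midnight
    timezone: "GMT-3"
aggregation:
  raw: "false"              # aggregated output; required by the Tableau dashboards
collectors: "all"
loader:
  strategy: "logger"        # csv | json | logger | splunk | am | elk | tableau | mongodb
anypoint:
  platform:
    host: "anypoint.mulesoft.com"
auth:
  mode: "connected-app-credentials"
  orgId: "<your-master-org-id>"
```

Credentials such as `auth.clientId` and `auth.clientSecret` belong in the secure `app-{env}.yaml`, encrypted with the Secure Properties Module.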
Name | Description | Default Value |
---|---|---|
sdlc.confluence.enabled | Property to enable or disable collecting metrics from Confluence | false |
sdlc.confluence.host | Confluence server host | |
sdlc.confluence.port | Confluence server port | |
sdlc.confluence.path | Context URL of the Confluence REST API | |
sdlc.confluence.user | Authorized Confluence user to access the REST APIs | |
sdlc.confluence.token | User token to access the REST APIs | |
sdlc.bitbucket.enabled | Property to enable or disable collecting metrics from Bitbucket | false |
sdlc.bitbucket.host | Bitbucket server host | |
sdlc.bitbucket.port | Bitbucket server port | |
sdlc.bitbucket.path | Context URL of the Bitbucket REST API | |
sdlc.bitbucket.user | Authorized Bitbucket user to access the REST APIs | |
sdlc.bitbucket.token | User token to access the REST APIs | |
sdlc.jira.enabled | Property to enable or disable collecting metrics from Jira | false |
sdlc.jira.host | Jira server host | |
sdlc.jira.port | Jira server port | |
sdlc.jira.path | Context URL of the Jira REST API | |
sdlc.jira.backlogPath | Context URL of the Jira REST API to fetch stories from the backlog | |
sdlc.jira.user | Authorized Jira user to access the REST APIs | |
sdlc.jira.token | User token to access the REST APIs | |
sdlc.jenkins.enabled | Property to enable or disable collecting metrics from Jenkins | false |
sdlc.jenkins.host | Jenkins server host | |
sdlc.jenkins.port | Jenkins server port | |
sdlc.jenkins.path | Context URL of the Jenkins REST API | |
sdlc.jenkins.user | Authorized Jenkins user to access the REST APIs | |
sdlc.jenkins.token | User token to access the REST APIs | |
sdlc.splunk.enabled | Property to enable or disable collecting metrics from Splunk | false |
sdlc.splunk.host | Splunk server host | |
sdlc.splunk.port | Splunk server port | |
sdlc.splunk.path | Context URL of the Splunk REST API | |
sdlc.splunk.user | Authorized Splunk user to access the REST APIs | |
sdlc.splunk.password | Password to access the REST APIs | |
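For example, enabling only the Jira collector could look like the following sketch (host, path, and user values are hypothetical, and the YAML nesting is an assumption; mirror the structure of the bundled `app-{env}.yaml`):

```yaml
sdlc:
  jira:
    enabled: "true"
    host: "jira.example.com"   # hypothetical host
    port: "443"
    path: "/rest/api/2"        # hypothetical context path; check your Jira REST API base
    user: "metrics-bot"        # hypothetical user
    token: "![encryptedToken]" # store encrypted in the secure app-{env}.yaml
```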
NOTE: Each external system collector should be self-contained; that is, all associated configuration must be part of the Mule configuration file itself and must not be externalized in `global.xml`.
A Postman collection containing sample API requests is included under `/postman/postman_collection.json`. To import it, open Postman, click File > Import, and select the .json file. The collection contains documentation describing prerequisites and setup; clicking the collection folder opens the documentation window. A number of environment variables are needed. A template environment is also included under `/postman/environment_template.json`; to import it into Postman, click Environments > Import and select the .json file. Then populate the variables with accurate values.
- Create 2 indexes: metrics and platform_benefits (of type Events)
- In the Splunk instance, configure an HTTP Event Collector (HEC) associated with these 2 indexes, format `_json`
- The token obtained will be used as part of the properties of the Mule application
- Create a new application
- Load the dashboards: copy the XMLs provided under `/dashboards/splunk` to `{SPLUNK_HOME}/etc/apps/{APP_NAME}/local/data/ui/views`
- Restart the Splunk instance
- If you can't copy the dashboard XMLs, you can create them via the UI and, using the "Source" option, copy and paste the content of the XMLs provided
Follow official Splunk documentation: https://docs.splunk.com/Documentation/Splunk/
Name | Description | Default Value |
---|---|---|
splunk.host | HTTP Event Collector (HEC) host | |
splunk.port | HEC port | 8088 |
splunk.protocol | HEC endpoint protocol: HTTPS or HTTP | HTTP |
splunk.token | HEC token | |
splunk.source | HEC source | metrics-source |
splunk.source.type | Source Type | _json (*) |
splunk.index.metrics | Index for storing Platform operational metrics | metrics |
splunk.index.benefits | Index for storing Platform benefits | platform_benefits |
(*): Please note that by default, Source Types are created with a limit of 3000 characters, and the Metrics Toolkit JSON event will likely exceed this limit. To solve that, you must increase the limit by adding a new property, "TRUNCATE", in the Advanced configuration of the specific Source Type, for example: TRUNCATE = 40000. Depending on the size of your organization (in terms of Business Groups, environments, and the number of applications and APIs in each environment), this value may need to be higher.
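If you manage Splunk configuration as files rather than through the UI, the same change can be applied in `props.conf` (assuming the source type is `_json`, as configured above):

```
[_json]
TRUNCATE = 40000
```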
NOTE: Dashboards were created and tested with Kibana 7.6.2, adjustments may be necessary for other versions
- The toolkit will load data into the `metrics` and `platformbenefits` indexes. Once data is loaded, create an index pattern on Kibana for these indexes
- Set the loader strategy to `elk` in the `app-{env}.yaml` file, along with the `elk.user` and `elk.password` parameters in the secure `app-{env}.yaml` file
- To load the dashboards, replace the `<YOUR-INDEX-PATTERN-ID>` occurrences in all of the dashboards provided under `/dashboards/elk` with your index pattern ID. The index pattern ID can be obtained in Kibana under `Management >> Index Patterns`
- Log into your Kibana instance and, in the `Management >> Saved Objects` menu, click Import for each dashboard. This will import all dashboards and visualizations
Follow official ELK documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html https://www.elastic.co/guide/en/kibana/current/index.html
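As a quick smoke test after a load, you can ask Elasticsearch how many documents landed in the `metrics` index via its `_count` API; the host below is an assumption matching a local default setup:

```shell
ES_HOST="localhost"   # assumption: your Elasticsearch host
ES_PORT="9200"        # default port from the properties table
COUNT_URL="http://${ES_HOST}:${ES_PORT}/metrics/_count"
echo "${COUNT_URL}"
# Uncomment to query a live cluster:
# curl -s "${COUNT_URL}"
```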
Name | Description | Default Value |
---|---|---|
elk.host | Elasticsearch host | |
elk.port | Elasticsearch port | 9200 |
elk.protocol | Elasticsearch protocol: HTTP | HTTP |
elk.user | Elasticsearch username | |
elk.password | Elasticsearch password | |
elk.index.metrics | Index for storing Platform operational metrics | metrics |
elk.index.benefits | Index for storing Platform benefits | platformbenefits |
Tableau Dashboards use a JSON File Data Source, which is configured to load data from all JSON files present in a given directory, each JSON file representing a Platform Metrics snapshot.
Metrics Toolkit provides the following approaches to generate those files:
- Using the Tableau Strategy (poller or API): this option can only be used if Tableau Desktop can access the filesystem where Metrics Toolkit is running (it cannot be used for CloudHub or Runtime Fabric, since Tableau Desktop won't be able to access the Mule Runtime's local filesystem).
- Using the Get Platform Metrics API operation (check GET Platform Metrics request provided in the Postman collection): suitable if Metrics Toolkit is running on CloudHub or Runtime Fabric, or if you don't have access to Mule Runtime local filesystem.
To learn more about Tableau, follow the official documentation: https://www.tableau.com/support/help
NOTES:
- The Tableau Dashboard Data Source will only use JSON files that match the following naming pattern: platform_metrics_agg_*.json
- The Tableau Dashboard Data Source can only use JSON files containing aggregated data. Thus, Tableau won't be able to render dashboards if the JSON files contain raw data (check the Properties Configuration section for further details).
When using the Tableau Strategy, Metrics Toolkit will create the JSON files in the directory defined by the `tableau.outputDir` property (poller) or by the `loaderDetails.outputDir` field provided in the request (API).
Check the POST Platform Metrics - Load - Tableau Strategy request in the Postman collection if using the API Loader.
IMPORTANT: Make sure the `aggregation.raw` property (poller) or the `loaderDetails.rawData` field (API) is set to `false`
- Use JSON loader strategy via API (check POST Platform Metrics - Load - JSON Strategy in the provided Postman collection)
- Save the response content as a JSON file in the directory of your choice. All files must be in the same directory and must respect the naming convention (platform_metrics_agg_yyyyMMddHHmmssSSS.json)
IMPORTANT: Make sure the query parameter `raw` is set to `false`
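The naming convention above can be produced with a small script; the timestamp format is derived from the documented pattern (`yyyyMMddHHmmssSSS`), `date +%N` assumes GNU coreutils, and the endpoint in the commented `curl` line is hypothetical:

```shell
# Build a file name matching platform_metrics_agg_yyyyMMddHHmmssSSS.json
MILLIS="$(printf '%03d' $(( 10#$(date +%N) / 1000000 )))"
FILE="platform_metrics_agg_$(date +%Y%m%d%H%M%S)${MILLIS}.json"
echo "${FILE}"
# Save the JSON loader response under that name, e.g.:
# curl -s -X POST "http://localhost:8081/<json-loader-path>?raw=false" -o "${FILE}"
```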
- Before opening a workbook in Tableau Desktop, make sure that you have at least one JSON file available.
- Make a copy of the desired workbook provided under `/dashboards/tableau`.
- After copying the workbook, open it in Tableau Desktop.
- When prompted, edit the workbook connection. Select one of the available JSON files and click OK.
Name | Description | Default Value |
---|---|---|
tableau.outputDir | Directory where JSON files will be written to | |
Workbook | File Name Pattern |
---|---|
current_consolidated | platform_metrics_agg_*.json |
- Enable the dashboard by changing the embedded.dashboard.enabled property to "true"
- Deploy & Run the application
- Use a web browser to access the application's base URL (e.g. if deployed locally, use http://localhost:8081)
- Use the "Login" page to enter your Anypoint Platform username, password, and organization ID
- Wait for the dashboard to run the metrics request; once done, navigate through the different metrics using the UI
Using the `sfdc` loader option and initialising Salesforce Analytics Studio with an empty project will allow you to quickly inject data into Tableau CRM dashboards to visualise your Anypoint Platform metrics in different ways. Tableau CRM can also be used as a historic data repository, allowing the metrics data to be displayed over time for trend analysis (e.g. vCore growth, API count, or transaction growth over time).
- Create a new application in Data Manager
- If using the poller functionality, configure the SFDC loader properties in the properties files. If using the API, format the request body with the required parameters
- Deploy and Run the metrics toolkit application, allowing the poller to execute, or performing an API request
- Validate the dataset is created in Data Manager
- Create a new data recipe, setting your new dataset as the source. Transform "value" from a dimension to a measure.
- (Optional) Filter out the non-numeric value fields (API Manager Policies Used and CloudHub Runtime Versions Used) and store them in a new 'enum' dataset
- (Optional) Create a 'historic' dataset, and add a step to your data recipe to append to this historic dataset. More detailed steps can be found in the SFDC-specific README.
Note: this is only a high-level introduction, and it is highly recommended that you become familiar with Tableau CRM through the official documentation.
Name | Description | Default Value |
---|---|---|
sfdc.appName | Application name in Tableau CRM | |
sfdc.dataSetName | Dataset name to be created/overwritten in Tableau CRM | |
sfdc.notificationEmail | Email address for notifications to be sent to | |
sfdc.sendNotification | Occurrences that should generate notifications. ENUM values are: ALWAYS, FAILURES, NEVER, WARNINGS | NEVER |
sfdc.username | Salesforce username to use for authentication | |
sfdc.password | Salesforce password | |
sfdc.securityToken | Salesforce developer security token | |
sfdc.authUrl | Specific Salesforce auth URL to be used if required |
NOTE: Data was pushed and tested with MongoDB 4.4.1. Adjustments may be necessary for other versions.
- In MongoDB, create a database called `matrixdb` and a collection called `metrics`.
- Set the loader strategy to `mongodb` in the `app-{env}.yaml` file, along with the `mongodb.username` and `mongodb.password` parameters in the secure `app-{env}.yaml` file
- Uncomment the 2 sections in the `pom.xml` (dependencies and shared library)
- Rename the file `src/main/mule/loaders/loader-mongodb.disabled` to `src/main/mule/loaders/loader-mongodb.xml`
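The database and collection from the first step can be created in a `mongosh` session, for example:

```
use matrixdb
db.createCollection("metrics")
```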
Name | Description | Default Value |
---|---|---|
mongodb.host | MongoDB host | |
mongodb.port | MongoDB port | |
mongodb.database | Database name | |
mongodb.collection | Collection name | |
mongodb.username | MongoDB user | |
mongodb.password | MongoDB password |
- This application can be deployed in any Mule Runtime (OnPrem, CloudHub, RTF)
- The metrics collected will depend on the features available in each account; e.g. if the account has the API Manager add-on, the toolkit will collect and aggregate the metrics related to API Manager, otherwise the values will appear as zeroes; if using PCE, there won't be information about API Analytics
- To enable or disable specific collectors, change the property "collectors" if using the poller, or add a query parameter "collectors" if using the API, including a CSV string as explained in the properties section
- Access Management metrics:
- Not supported on GovCloud
- Exchange reuse metrics:
- Not supported for Private Cloud Edition (PCE)
- API Manager metrics:
- API Manager metrics available only for accounts with the API Manager and Analytics add-on
- Runtime Manager (CloudHub) application metrics:
- CloudHub is not supported on Private Cloud Edition (PCE)
- Runtime Manager (CloudHub) networking metrics - VPCs, VPNs, DLBs and static IPs usage:
- Not supported when authenticating with Connected Apps
- Not supported on GovCloud
- Runtime Manager (RuntimeFabric) metrics:
- Runtime Fabric is not supported on GovCloud
- Runtime Fabric is not supported on Private Cloud Edition (PCE)
- Runtime Manager (Standalone) metrics:
- Runtime Manager (Standalone Runtimes) not supported on GovCloud
- API Platform Client Applications metrics:
- Not supported when authenticating with Connected Apps
- Not supported on GovCloud
- Analytics metrics:
- Not supported on GovCloud
- Not supported on Private Cloud Edition (PCE)
- Not supported when authenticating with Connected Apps
- Anypoint MQ metrics:
- Not supported on Private Cloud Edition (PCE)
- Not supported when authenticating with Connected Apps
- Not supported on GovCloud
- OSv2 metrics:
- Not supported on Private Cloud Edition (PCE)
- Not supported when authenticating with Connected Apps
The toolkit is intended to cover the main areas to define and implement metrics using Mule.
There are several business needs around the definition of metrics. The principal goal is to provide visibility on 4 main areas: People/Teams, Projects, Processes and Technology. By having visibility into these areas, the involved stakeholders can make decisions, anticipate technical and non-technical needs, and optimize time and resources.
Any system containing raw data
- Basic measurements:
Examples:
- Number of projects
- Number of incidents
- Complex/Composed measurements
Examples:
- Average daily-TPS per application/project
- vCores usage per application/project - monthly
- Indexes/KPIs
Examples:
- Growth
- Tendencies
- Efficiency / Performance
How to link business needs, measurements and data sources?
- Event: canonical object representing a metric, based on dimensions and facts
- Collectors: processes to collect raw data from different data sources
- Aggregators/Transformers: transform (calculate, compose, enrich) raw data and build normalized/denormalized metric/event objects
- Loaders: processes to send/load events into external storages/systems
- Visualization: third-party systems for showing charts and dashboards based on metric events (e.g. MicroStrategy, Tableau, Splunk dashboards, ELK, custom dashboards, BI software, etc.)
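As an illustration only (the field names below are hypothetical, not the toolkit's actual canonical schema), a metric event combining dimensions and a fact could look like:

```
{
  "timestamp": "2024-01-01T00:00:00Z",
  "product": "API Manager",
  "metric": "API Instances Total",
  "dimensions": { "businessGroup": "Master", "environment": "Production" },
  "value": 42
}
```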
Enjoy and provide feedback / contribute :)
- All `403` errors, specifically for the endpoint related to RTF deployments (`/hybrid/api/v2/`), are permissions issues; make sure the user/connected app has the right permissions (Runtime Manager and Runtime Fabric specifically): see Requirements