diff --git a/src/i18n/content/es/docs/ai-monitoring/intro-to-ai-monitoring.mdx b/src/i18n/content/es/docs/ai-monitoring/intro-to-ai-monitoring.mdx
index ceda893ba93..20f1e3a6720 100644
--- a/src/i18n/content/es/docs/ai-monitoring/intro-to-ai-monitoring.mdx
+++ b/src/i18n/content/es/docs/ai-monitoring/intro-to-ai-monitoring.mdx
@@ -5,7 +5,7 @@ freshnessValidatedDate: '2024-11-04T00:00:00.000Z'
translationType: machine
---
-AI monitoring is our application performance monitoring (APM) solution for AI. When you enable AI monitoring, our APM agent can give you end-to-end visibility into the performance, cost, and quality of [supported models](/docs/ai-monitoring/compatibility-requirements-ai-monitoring) from vendors such as OpenAI and BedRock. Explore how users interact with an AI assistant, drill into trace-level details about a model's response to an AI event, and compare the performance of different models across app environments.
+AI monitoring is our application performance monitoring (APM) solution for AI. When you enable AI monitoring, our APM agent can give you end-to-end visibility into the performance, cost, and quality of [supported models](/docs/ai-monitoring/compatibility-requirements-ai-monitoring) from vendors such as OpenAI, Bedrock, and DeepSeek. Explore how users interact with an AI assistant, drill into trace-level details about a model's response to an AI event, and compare the performance of different models across app environments.
@@ -23,7 +23,7 @@ When your AI assistant receives a prompt and returns a response, the agent
* Track the requests and responses that pass through any of our supported models
* Correlate negative or positive feedback about a response from your end user
-You can access all this information and more from the New Relic platform, then create alerts and control panels to help you effectively manage your AI data and improve performance.
+You can access all this information and more from the New Relic platform, then create alerts and dashboards to help you effectively manage your AI data and improve performance.
## Improve AI performance with AI monitoring [#improve-performance]
diff --git a/src/i18n/content/es/docs/alerts/get-notified/new-relic-webhook-for-microsoft-teams-workflow.mdx b/src/i18n/content/es/docs/alerts/get-notified/new-relic-webhook-for-microsoft-teams-workflow.mdx
new file mode 100644
index 00000000000..43474532dd1
--- /dev/null
+++ b/src/i18n/content/es/docs/alerts/get-notified/new-relic-webhook-for-microsoft-teams-workflow.mdx
@@ -0,0 +1,171 @@
+---
+title: New Relic webhook for Microsoft Teams workflow
+tags:
+ - Alerts
+ - Incident intelligence
+ - New Relic webhook for Microsoft Teams workflow
+metaDescription: Read about how to add a New Relic webhook for Microsoft Teams workflow.
+freshnessValidatedDate: never
+translationType: machine
+---
+
+Microsoft is retiring the Microsoft 365 webhook-based connectors service in Teams by the end of 2025. To continue receiving New Relic alert notifications, you can create a workflow within Microsoft Teams using the Workflows app. This document provides instructions to update your New Relic alert destinations and workflows, ensuring a smooth transition and uninterrupted alerts in your Teams channels. For more information about the retirement of the Office 365 connectors service, see [the Microsoft developer blog](https://devblogs.microsoft.com/microsoft365dev/retirement-of-office-365-connectors-within-microsoft-teams/).
+
+**Prerequisites:**
+
+* Create a new workflow in Microsoft Teams for New Relic alerts. After creating the workflow, copy the POST request URL; you'll need this URL in New Relic. For more information, see [Microsoft's documentation for creating a workflow in Teams](https://support.microsoft.com/en-us/office/create-incoming-webhooks-with-workflows-for-microsoft-teams-8ae491c7-0394-4861-ba59-055e33f75498). An optional curl check is sketched below.
+
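+Optionally, you can verify the copied URL before wiring it into New Relic by POSTing a minimal Adaptive Card to it. This is only a sketch, not New Relic functionality; the URL below is a placeholder for the POST request URL you copied from Teams:
+
+```shell
+# Hypothetical URL: replace it with the POST request URL copied from your Teams workflow.
+curl -X POST 'https://prod-00.westus.logic.azure.com/workflows/REPLACE_ME/triggers/manual/paths/invoke' \
+  -H 'Content-Type: application/json' \
+  -d '{
+    "type": "message",
+    "attachments": [
+      {
+        "contentType": "application/vnd.microsoft.card.adaptive",
+        "content": {
+          "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
+          "type": "AdaptiveCard",
+          "version": "1.2",
+          "body": [{ "type": "TextBlock", "text": "Test message from New Relic webhook setup" }]
+        }
+      }
+    ]
+  }'
+```
+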
+**To add a New Relic webhook for Microsoft Teams workflow:**
+
+1. Update the existing webhook destination:
+
+   1. Go to **[one.newrelic.com > All capabilities](https://one.newrelic.com/all-capabilities) > Alerts > Enrich and Notify > Destinations**.
+   2. Click the required webhook destination linked to Microsoft Teams to edit it.
+   3. After creating the workflow in Teams, in the **Endpoint URL** field, replace the existing URL with the new one.
+
+   4. Click **Update destination**.
+
+2. Update the existing webhook workflow:
+
+   1. Go to **[one.newrelic.com > All capabilities](https://one.newrelic.com/all-capabilities) > Alerts > Enrich and Notify > Workflows**.
+   2. To edit the notification payload, click the required workflow linked to the destination.
+
+   3. On the Edit notification message screen, in the **Template** field, copy and paste the following payload:
+
+ ```json
+ {
+ "type": "message",
+ "attachments": [
+ {
+ "contentType": "application/vnd.microsoft.card.adaptive",
+ "contentUrl": null,
+ "content": {
+ "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
+ "type": "AdaptiveCard",
+ "version": "1.2",
+ "msteams": { "width": "full" },
+ "body": [
+ {
+ "type": "ColumnSet",
+ "columns": [
+
+ {
+ "type": "Column",
+ "items": [
+ {
+ "type": "Image",
+ "style": "Person",
+ "url": "https://avatars.slack-edge.com/2022-06-02/3611814361970_f6a28959c2e7258660ea_512.png",
+ "size": "Small"
+ }
+ ],
+ "width": "auto"
+ },
+
+ {
+ "type": "Column",
+ "items": [
+ {
+ "type": "TextBlock",
+ "size": "large",
+ "weight": "bolder",
+ "text": "{{ priorityText }} priority issue is {{#if issueClosedAt}}CLOSED{{else}}{{#if issueAcknowledgedAt}}ACKNOWLEDGED{{else}}ACTIVATED{{/if}}{{/if}}"
+ },
+ {
+ "type": "TextBlock",
+ "size": "large",
+ "wrap": "true",
+ "maxLines": "2",
+ "weight": "bolder",
+ "text": "[{{ issueTitle }}]({{ issuePageUrl }})"
+ }
+ ],
+ "width": "stretch"
+ }
+ ]
+ },
+ {{#if accumulations.conditionDescription.[0]}}
+ {
+ "type": "TextBlock",
+ "text": {{ json accumulations.conditionDescription.[0] }},
+ "wrap": true
+ },
+ {{/if}}
+ {{#eq "Not Available" violationChartUrl}}
+ {{else}}
+ {
+ "type": "Image",
+ "url": {{ json violationChartUrl }}
+ },
+ {{/eq}}
+ {
+ "type": "FactSet",
+ "facts": [
+ {
+ "title": "*Impacted entities:*",
+ "value": "{{#each entitiesData.names}}{{#lt @index 5}}{{this}}{{#unless @last}},{{/unless}}{{/lt}}{{/each}}"
+ },
+ {{#if accumulations.policyName }}
+ {
+ "title": "*Policy:*",
+ "value": {{ json accumulations.policyName.[0]}}
+ },
+ {{/if}}
+ {{#if accumulations.conditionName }}
+ {
+ "title": "*Condition:*",
+ "value": {{ json accumulations.conditionName.[0]}}
+ },
+ {{#eq impactedEntitiesCount 1}}
+ {{else}}
+ {
+ "title": "*Total Incidents:*",
+ "value": {{ json impactedEntitiesCount}}
+ },
+ {{/eq}}
+ {{/if}}
+ {
+ "title": "Workflow Name:",
+ "value": {{ json workflowName }}
+ }
+ ]
+ },
+ {
+ "type": "ActionSet",
+ "actions": [
+ {
+ "type": "Action.OpenUrl",
+ "title": "📥 Acknowledge",
+ "url": {{ json issueAckUrl }}
+ },
+ {
+ "type": "Action.OpenUrl",
+ "title": "✔️ Close",
+ "url": {{ json issueCloseUrl }}
+ }
+ {{#if accumulations.deepLinkUrl}}
+ ,{
+ "type": "Action.OpenUrl",
+ "title": "🔎 View Query",
+ "url": {{ json accumulations.deepLinkUrl.[0] }},
+ "mode": "secondary"
+ }
+ {{/if}}
+ {{#if accumulations.runbookUrl}}
+ ,{
+ "type": "Action.OpenUrl",
+ "title": "📕 View Runbook",
+ "url": {{ json accumulations.runbookUrl.[0] }},
+ "mode": "secondary"
+ }
+ {{/if}}
+ ]
+ }
+ ]
+ }
+ }
+ ]
+ }
+ ```
+
+   4. Click **Save message**.
\ No newline at end of file
diff --git a/src/i18n/content/es/docs/apm/agents/net-agent/getting-started/net-agent-compatibility-requirements.mdx b/src/i18n/content/es/docs/apm/agents/net-agent/getting-started/net-agent-compatibility-requirements.mdx
index 4ca2ac8fe08..e09d6846a22 100644
--- a/src/i18n/content/es/docs/apm/agents/net-agent/getting-started/net-agent-compatibility-requirements.mdx
+++ b/src/i18n/content/es/docs/apm/agents/net-agent/getting-started/net-agent-compatibility-requirements.mdx
@@ -368,7 +368,7 @@ For frameworks and libraries that are not [automatically instrumented](#instrum
**Microsoft.Data.SqlClient**
* Minimum supported version: 1.0.19239.1
- * Latest verified compatible version: 5.2.1
+ * Latest verified compatible version: 6.0.1
@@ -451,7 +451,7 @@ For frameworks and libraries that are not [automatically instrumented](#instrum
**MySql.Data**
* Minimum supported version: 6.10.7
- * Latest verified compatible version: 9.1.0
+ * Latest verified compatible version: 9.2.0
**MySqlConnector**
@@ -538,7 +538,7 @@ For frameworks and libraries that are not [automatically instrumented](#instrum
* Minimum supported version: 3.5.0
- * Latest verified compatible version: 3.7.405.5
+ * Latest verified compatible version: 3.7.405.13
* Minimum required agent version: 10.33.0
@@ -631,7 +631,7 @@ For frameworks and libraries that are not [automatically instrumented](#instrum
- 3.7.411.20
+ 3.7.412.4
|
@@ -827,7 +827,7 @@ For frameworks and libraries that are not [automatically instrumented](#instrum
* Minimum supported version: 7.1.0
- * Latest verified compatible version: 8.3.4
+ * Latest verified compatible version: 8.3.6
@@ -841,7 +841,7 @@ For frameworks and libraries that are not [automatically instrumented](#instrum
* Minimum supported version: 3.7.0
- * Latest verified compatible version: 3.7.400.79
+ * Latest verified compatible version: 3.7.400.86
@@ -1248,7 +1248,7 @@ For frameworks and libraries that are not [automatically instrumented](#instrum
**Microsoft.Data.SqlClient**
* Minimum supported version: 1.0.19239.1
- * Latest verified compatible version: 5.2.1
+ * Latest verified compatible version: 6.0.1
**System.Data**
@@ -1316,7 +1316,7 @@ For frameworks and libraries that are not [automatically instrumented](#instrum
**MySql.Data**
* Minimum supported version: 6.10.7
- * Latest verified compatible version: 9.1.0
+ * Latest verified compatible version: 9.2.0
**MySqlConnector**
@@ -1450,7 +1450,7 @@ For frameworks and libraries that are not [automatically instrumented](#instrum
* Minimum supported version: 3.5.0
- * Latest verified compatible version: 3.7.405.5
+ * Latest verified compatible version: 3.7.405.13
* Minimum required agent version: 10.33.0
@@ -1601,7 +1601,7 @@ For frameworks and libraries that are not [automatically instrumented](#instrum
- 3.7.411.20
+ 3.7.412.4
|
@@ -1805,7 +1805,7 @@ For frameworks and libraries that are not [automatically instrumented](#instrum
* Minimum supported version: 7.1.0
- * Latest verified compatible version: 8.3.4
+ * Latest verified compatible version: 8.3.6
@@ -1819,7 +1819,7 @@ For frameworks and libraries that are not [automatically instrumented](#instrum
* Minimum supported version: 3.7.0
- * Latest verified compatible version: 3.7.400.79
+ * Latest verified compatible version: 3.7.400.86
diff --git a/src/i18n/content/jp/docs/ai-monitoring/intro-to-ai-monitoring.mdx b/src/i18n/content/jp/docs/ai-monitoring/intro-to-ai-monitoring.mdx
index af8411a8450..975f7411758 100644
--- a/src/i18n/content/jp/docs/ai-monitoring/intro-to-ai-monitoring.mdx
+++ b/src/i18n/content/jp/docs/ai-monitoring/intro-to-ai-monitoring.mdx
@@ -5,7 +5,7 @@ freshnessValidatedDate: '2024-11-04T00:00:00.000Z'
translationType: machine
---
-AI monitoring is our application performance monitoring (APM) solution for AI. When you enable AI monitoring, our APM agent gives you end-to-end visibility into the performance, cost, and quality of [supported models](/docs/ai-monitoring/compatibility-requirements-ai-monitoring) from vendors such as OpenAI and BedRock. Explore how users interact with an AI assistant, drill into trace-level details about a model's response to an AI event, and compare the performance of different models across app environments.
+AI monitoring is our application performance monitoring (APM) solution for AI. When you enable AI monitoring, our APM agent gives you end-to-end visibility into the performance, cost, and quality of [supported models](/docs/ai-monitoring/compatibility-requirements-ai-monitoring) from vendors such as OpenAI, Bedrock, and DeepSeek. Explore how users interact with an AI assistant, drill into trace-level details about a model's response to an AI event, and compare the performance of different models across app environments.
diff --git a/src/i18n/content/jp/docs/alerts/get-notified/new-relic-webhook-for-microsoft-teams-workflow.mdx b/src/i18n/content/jp/docs/alerts/get-notified/new-relic-webhook-for-microsoft-teams-workflow.mdx
new file mode 100644
index 00000000000..b23d18f7c57
--- /dev/null
+++ b/src/i18n/content/jp/docs/alerts/get-notified/new-relic-webhook-for-microsoft-teams-workflow.mdx
@@ -0,0 +1,171 @@
+---
+title: New Relic webhook for Microsoft Teams workflow
+tags:
+ - Alerts
+ - Incident intelligence
+ - New Relic webhook for Microsoft Teams workflow
+metaDescription: Read about how to add a New Relic webhook for Microsoft Teams workflow.
+freshnessValidatedDate: never
+translationType: machine
+---
+
+Microsoft is retiring the Microsoft 365 webhook-based connectors service in Teams by the end of 2025. To continue receiving New Relic alert notifications, you can create a workflow within Microsoft Teams using the Workflows app. This document provides instructions to update your New Relic alert destinations and workflows, ensuring a smooth transition and uninterrupted alerts in your Teams channels. For more information about the retirement of the Office 365 connectors service, see [the Microsoft developer blog](https://devblogs.microsoft.com/microsoft365dev/retirement-of-office-365-connectors-within-microsoft-teams/).
+
+**Prerequisites:**
+
+* Create a new workflow in Microsoft Teams for New Relic alerts. After creating the workflow, copy the POST request URL; you'll need this URL in New Relic. For more information, see [Microsoft's documentation for creating a workflow in Teams](https://support.microsoft.com/en-us/office/create-incoming-webhooks-with-workflows-for-microsoft-teams-8ae491c7-0394-4861-ba59-055e33f75498). An optional curl check is sketched below.
+
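+Optionally, you can verify the copied URL before wiring it into New Relic by POSTing a minimal Adaptive Card to it. This is only a sketch, not New Relic functionality; the URL below is a placeholder for the POST request URL you copied from Teams:
+
+```shell
+# Hypothetical URL: replace it with the POST request URL copied from your Teams workflow.
+curl -X POST 'https://prod-00.westus.logic.azure.com/workflows/REPLACE_ME/triggers/manual/paths/invoke' \
+  -H 'Content-Type: application/json' \
+  -d '{
+    "type": "message",
+    "attachments": [
+      {
+        "contentType": "application/vnd.microsoft.card.adaptive",
+        "content": {
+          "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
+          "type": "AdaptiveCard",
+          "version": "1.2",
+          "body": [{ "type": "TextBlock", "text": "Test message from New Relic webhook setup" }]
+        }
+      }
+    ]
+  }'
+```
+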
+**To add a New Relic webhook for Microsoft Teams workflow:**
+
+1. Update the existing webhook destination:
+
+   1. Go to **[one.newrelic.com > All capabilities](https://one.newrelic.com/all-capabilities) > Alerts > Enrich and Notify > Destinations**.
+   2. Click the required webhook destination linked to Microsoft Teams to edit it.
+   3. After creating the workflow in Teams, in the **Endpoint URL** field, replace the existing URL with the new one.
+
+   4. Click **Update destination**.
+
+2. Update the existing webhook workflow:
+
+   1. Go to **[one.newrelic.com > All capabilities](https://one.newrelic.com/all-capabilities) > Alerts > Enrich and Notify > Workflows**.
+   2. To edit the notification payload, click the required workflow linked to the destination.
+
+   3. On the Edit notification message screen, in the **Template** field, copy and paste the following payload:
+
+ ```json
+ {
+ "type": "message",
+ "attachments": [
+ {
+ "contentType": "application/vnd.microsoft.card.adaptive",
+ "contentUrl": null,
+ "content": {
+ "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
+ "type": "AdaptiveCard",
+ "version": "1.2",
+ "msteams": { "width": "full" },
+ "body": [
+ {
+ "type": "ColumnSet",
+ "columns": [
+
+ {
+ "type": "Column",
+ "items": [
+ {
+ "type": "Image",
+ "style": "Person",
+ "url": "https://avatars.slack-edge.com/2022-06-02/3611814361970_f6a28959c2e7258660ea_512.png",
+ "size": "Small"
+ }
+ ],
+ "width": "auto"
+ },
+
+ {
+ "type": "Column",
+ "items": [
+ {
+ "type": "TextBlock",
+ "size": "large",
+ "weight": "bolder",
+ "text": "{{ priorityText }} priority issue is {{#if issueClosedAt}}CLOSED{{else}}{{#if issueAcknowledgedAt}}ACKNOWLEDGED{{else}}ACTIVATED{{/if}}{{/if}}"
+ },
+ {
+ "type": "TextBlock",
+ "size": "large",
+ "wrap": "true",
+ "maxLines": "2",
+ "weight": "bolder",
+ "text": "[{{ issueTitle }}]({{ issuePageUrl }})"
+ }
+ ],
+ "width": "stretch"
+ }
+ ]
+ },
+ {{#if accumulations.conditionDescription.[0]}}
+ {
+ "type": "TextBlock",
+ "text": {{ json accumulations.conditionDescription.[0] }},
+ "wrap": true
+ },
+ {{/if}}
+ {{#eq "Not Available" violationChartUrl}}
+ {{else}}
+ {
+ "type": "Image",
+ "url": {{ json violationChartUrl }}
+ },
+ {{/eq}}
+ {
+ "type": "FactSet",
+ "facts": [
+ {
+ "title": "*Impacted entities:*",
+ "value": "{{#each entitiesData.names}}{{#lt @index 5}}{{this}}{{#unless @last}},{{/unless}}{{/lt}}{{/each}}"
+ },
+ {{#if accumulations.policyName }}
+ {
+ "title": "*Policy:*",
+ "value": {{ json accumulations.policyName.[0]}}
+ },
+ {{/if}}
+ {{#if accumulations.conditionName }}
+ {
+ "title": "*Condition:*",
+ "value": {{ json accumulations.conditionName.[0]}}
+ },
+ {{#eq impactedEntitiesCount 1}}
+ {{else}}
+ {
+ "title": "*Total Incidents:*",
+ "value": {{ json impactedEntitiesCount}}
+ },
+ {{/eq}}
+ {{/if}}
+ {
+ "title": "Workflow Name:",
+ "value": {{ json workflowName }}
+ }
+ ]
+ },
+ {
+ "type": "ActionSet",
+ "actions": [
+ {
+ "type": "Action.OpenUrl",
+ "title": "📥 Acknowledge",
+ "url": {{ json issueAckUrl }}
+ },
+ {
+ "type": "Action.OpenUrl",
+ "title": "✔️ Close",
+ "url": {{ json issueCloseUrl }}
+ }
+ {{#if accumulations.deepLinkUrl}}
+ ,{
+ "type": "Action.OpenUrl",
+ "title": "🔎 View Query",
+ "url": {{ json accumulations.deepLinkUrl.[0] }},
+ "mode": "secondary"
+ }
+ {{/if}}
+ {{#if accumulations.runbookUrl}}
+ ,{
+ "type": "Action.OpenUrl",
+ "title": "📕 View Runbook",
+ "url": {{ json accumulations.runbookUrl.[0] }},
+ "mode": "secondary"
+ }
+ {{/if}}
+ ]
+ }
+ ]
+ }
+ }
+ ]
+ }
+ ```
+
+   4. Click **Save message**.
\ No newline at end of file
diff --git a/src/i18n/content/jp/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx b/src/i18n/content/jp/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx
new file mode 100644
index 00000000000..ca39f468725
--- /dev/null
+++ b/src/i18n/content/jp/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx
@@ -0,0 +1,231 @@
+---
+title: Snowflake integration with Flex
+tags:
+ - Snowflake integration
+ - New Relic integrations
+metaDescription: Install our Snowflake dashboards and see your Snowflake data in New Relic.
+freshnessValidatedDate: never
+translationType: machine
+---
+
+Our Snowflake integration lets you collect comprehensive data on many aspects, including query performance, storage system health, warehouse status, and billing information.
+
+
+
+
+  Once you set up the Snowflake integration with New Relic, you can immediately see your data in dashboards like these.
+
+
+
+
+  ## Install the infrastructure agent [#infra-install]
+
+  To use the Snowflake integration, you must also [install the infrastructure agent](/docs/infrastructure/install-infrastructure-agent/get-started/install-infrastructure-agent-new-relic/) on the same host. The infrastructure agent monitors the host itself, while the integration you'll install in the next step extends your monitoring with Snowflake-specific data.
+
+
+
+  ## Set up Snowflake metrics
+
+  Run the commands below to store Snowflake metrics in JSON format so that nri-flex can read them. Be sure to modify ACCOUNT, USERNAME, and SNOWSQL\_PWD as appropriate.
+
+ ```shell
+
+ # Run the below command as a 1 minute cronjob
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SERVICE_TYPE, NAME, AVG(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_AVERAGE", SUM(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_SUM", AVG(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_AVERAGE", SUM(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_SUM", AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."METERING_HISTORY" WHERE start_time >= DATE_TRUNC(day, CURRENT_DATE()) GROUP BY 1, 2;' > /tmp/snowflake-account-metering.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, AVG(AVG_RUNNING) AS "RUNNING_AVERAGE", AVG(AVG_QUEUED_LOAD) AS "QUEUED_LOAD_AVERAGE", AVG(AVG_QUEUED_PROVISIONING) AS "QUEUED_PROVISIONING_AVERAGE", AVG(AVG_BLOCKED) AS "BLOCKED_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_LOAD_HISTORY" GROUP BY 1;' > /tmp/snowflake-warehouse-load-history-metrics.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, avg(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_AVERAGE", sum(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_SUM", avg(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_AVERAGE", sum(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_SUM", avg(CREDITS_USED) as "CREDITS_USED_AVERAGE", sum(CREDITS_USED) as "CREDITS_USED_SUM" from "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" group by 1;' > /tmp/snowflake-warehouse-metering.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT table_name, table_schema, avg(ACTIVE_BYTES) as "ACTIVE_BYTES_AVERAGE", avg(TIME_TRAVEL_BYTES) as "TIME_TRAVEL_BYTES_AVERAGE", avg(FAILSAFE_BYTES) as "FAILSAFE_BYTES_AVERAGE", avg(RETAINED_FOR_CLONE_BYTES) as "RETAINED_FOR_CLONE_BYTES_AVERAGE" from "SNOWFLAKE"."ACCOUNT_USAGE"."TABLE_STORAGE_METRICS" group by 1, 2;' > /tmp/snowflake-table-storage-metrics.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT STORAGE_BYTES, STAGE_BYTES, FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STORAGE_USAGE" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-storage-usage.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT USAGE_DATE, AVG(AVERAGE_STAGE_BYTES) FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STAGE_STORAGE_USAGE_HISTORY" GROUP BY USAGE_DATE;' > /tmp/snowflake-stage-storage-usage-history.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."REPLICATION_USAGE_HISTORY" GROUP BY DATABASE_NAME;' > /tmp/snowflake-replication-usage-history.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(EXECUTION_TIME) AS "EXECUTION_TIME_AVERAGE", AVG(COMPILATION_TIME) AS "COMPILATION_TIME_AVERAGE", AVG(BYTES_SCANNED) AS "BYTES_SCANNED_AVERAGE", AVG(BYTES_WRITTEN) AS "BYTES_WRITTEN_AVERAGE", AVG(BYTES_DELETED) AS "BYTES_DELETED_AVERAGE", AVG(BYTES_SPILLED_TO_LOCAL_STORAGE) AS "BYTES_SPILLED_TO_LOCAL_STORAGE_AVERAGE", AVG(BYTES_SPILLED_TO_REMOTE_STORAGE) AS "BYTES_SPILLED_TO_REMOTE_STORAGE_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" GROUP BY QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME;' > /tmp/snowflake-query-history.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT PIPE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_INSERTED) AS "BYTES_INSERTED_AVERAGE", SUM(BYTES_INSERTED) AS "BYTES_INSERTED_SUM", AVG(FILES_INSERTED) AS "FILES_INSERTED_AVERAGE", SUM(FILES_INSERTED) AS "FILES_INSERTED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."PIPE_USAGE_HISTORY" GROUP BY PIPE_NAME;' > /tmp/snowflake-pipe-usage.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT QUERY_ID, QUERY_TEXT, (EXECUTION_TIME / 60000) AS EXEC_TIME, WAREHOUSE_NAME, USER_NAME, EXECUTION_STATUS FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" WHERE EXECUTION_STATUS = '\''SUCCESS'\'' ORDER BY EXECUTION_TIME DESC;' > /tmp/snowflake-longest-queries.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT EVENT_ID, EVENT_TIMESTAMP, EVENT_TYPE, REPORTED_CLIENT_TYPE, REPORTED_CLIENT_VERSION, FIRST_AUTHENTICATION_FACTOR, SECOND_AUTHENTICATION_FACTOR, IS_SUCCESS, ERROR_CODE, ERROR_MESSAGE FROM "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY" WHERE IS_SUCCESS = '\''NO'\'';' > /tmp/snowflake-login-failures.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVERAGE_DATABASE_BYTES, AVERAGE_FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATABASE_STORAGE_USAGE_HISTORY" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-database-storage-usage.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SOURCE_CLOUD, SOURCE_REGION, TARGET_CLOUD, TARGET_REGION, TRANSFER_TYPE, AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATA_TRANSFER_HISTORY" GROUP BY 1, 2, 3, 4, 5;' > /tmp/snowflake-data-transfer-usage.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, SUM(CREDITS_USED) AS TOTAL_CREDITS_USED FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" GROUP BY 1 ORDER BY 2 DESC;' > /tmp/snowflake-credit-usage-by-warehouse.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT TABLE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_AVERAGE", SUM(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_SUM", AVG(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_AVERAGE", SUM(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."AUTOMATIC_CLUSTERING_HISTORY" GROUP BY 1, 2, 3;' > /tmp/snowflake-automatic-clustering.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'select USER_NAME,EVENT_TYPE,IS_SUCCESS,ERROR_CODE,ERROR_MESSAGE,FIRST_AUTHENTICATION_FACTOR,SECOND_AUTHENTICATION_FACTOR from "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY";' > /tmp/snowflake-account-details.json
+
+ ```
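+
+  For instance, a minimal cron sketch, assuming you save the snowsql commands above into a hypothetical executable script `/usr/local/bin/snowflake-metrics.sh`:
+
+  ```shell
+  # Hypothetical wrapper script path; adjust it to wherever you saved the commands above.
+  # This crontab entry runs the script every minute, matching the 1-minute cadence noted above.
+  * * * * * /usr/local/bin/snowflake-metrics.sh
+  ```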
+
+
+
+  ## Enable the Snowflake integration with nri-flex
+
+  Follow these steps to set up the Snowflake integration:
+
+  1. Create a file named `nri-snowflake-config.yml` in the integrations directory:
+
+ ```shell
+
+ touch /etc/newrelic-infra/integrations.d/nri-snowflake-config.yml
+
+ ```
+
+  2. To enable the agent to capture Snowflake data, add the following snippet to the `nri-snowflake-config.yml` file:
+
+ ```yml
+
+ ---
+ integrations:
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeAccountMetering
+ apis:
+ - name: snowflakeAccountMetering
+ file: /tmp/snowflake-account-metering.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeWarehouseLoadHistory
+ apis:
+ - name: snowflakeWarehouseLoadHistory
+ file: /tmp/snowflake-warehouse-load-history-metrics.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeWarehouseMetering
+ apis:
+ - name: snowflakeWarehouseMetering
+ file: /tmp/snowflake-warehouse-metering.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeTableStorage
+ apis:
+ - name: snowflakeTableStorage
+ file: /tmp/snowflake-table-storage-metrics.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeStageStorageUsage
+ apis:
+ - name: snowflakeStageStorageUsage
+ file: /tmp/snowflake-stage-storage-usage-history.json
+ - name: nri-flex
+ interval: 30s
+ config:
+       name: snowflakeReplicationUsage
+       apis:
+         - name: snowflakeReplicationUsage
+ file: /tmp/snowflake-replication-usage-history.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeQueryHistory
+ apis:
+ - name: snowflakeQueryHistory
+ file: /tmp/snowflake-query-history.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakePipeUsage
+ apis:
+ - name: snowflakePipeUsage
+ file: /tmp/snowflake-pipe-usage.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeLongestQueries
+ apis:
+ - name: snowflakeLongestQueries
+ file: /tmp/snowflake-longest-queries.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeLoginFailure
+ apis:
+ - name: snowflakeLoginFailure
+ file: /tmp/snowflake-login-failures.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeDatabaseStorageUsage
+ apis:
+ - name: snowflakeDatabaseStorageUsage
+ file: /tmp/snowflake-database-storage-usage.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeDataTransferUsage
+ apis:
+ - name: snowflakeDataTransferUsage
+ file: /tmp/snowflake-data-transfer-usage.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeCreditUsageByWarehouse
+ apis:
+ - name: snowflakeCreditUsageByWarehouse
+ file: /tmp/snowflake-credit-usage-by-warehouse.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeAutomaticClustering
+ apis:
+ - name: snowflakeAutomaticClustering
+ file: /tmp/snowflake-automatic-clustering.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeStorageUsage
+ apis:
+ - name: snowflakeStorageUsage
+ file: /tmp/snowflake-storage-usage.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeAccountDetails
+ apis:
+ - name: snowflakeAccountDetails
+ file: /tmp/snowflake-account-details.json
+
+ ```
+
+
+
+  ## Restart the New Relic infrastructure agent
+
+  Restart your infrastructure agent:
+
+ ```shell
+
+ sudo systemctl restart newrelic-infra.service
+
+ ```
+
+  Within a few minutes, your application will send metrics to [one.newrelic.com](https://one.newrelic.com).
+
+
+
+  ## Find your data
+
+  You can choose our pre-built dashboard template named `Snowflake` to monitor your Snowflake application metrics. Follow these steps to use the pre-built dashboard template:
+
+  1. From [one.newrelic.com](https://one.newrelic.com), go to the **+ Add data** page.
+  2. Click **Dashboards**.
+  3. In the search bar, type `Snowflake`.
+  4. The Snowflake dashboard should appear. Click on it to install it.
+
+  Your Snowflake dashboard is considered a custom dashboard and can be found in the **Dashboards** UI. For docs on using and editing dashboards, see [our dashboard docs](/docs/query-your-data/explore-query-data/dashboards/introduction-dashboards).
+
+  Here is a NRQL query to check Snowflake metrics:
+
+ ```sql
+
+ SELECT * from snowflakeAccountSample
+
+ ```
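+
+  Each flex API in the config above also produces its own event type. A hypothetical example, assuming nri-flex appends `Sample` to the API names configured earlier and keeps the column aliases as attributes:
+
+  ```sql
+  SELECT average(CREDITS_USED_AVERAGE) FROM snowflakeWarehouseMeteringSample FACET WAREHOUSE_NAME TIMESERIES
+  ```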
+
+
+
+## What's next?
+
+To learn more about building NRQL queries and generating dashboards, check out these docs:
+
+* [Introduction to the query builder](/docs/query-your-data/explore-query-data/query-builder/introduction-query-builder) to create basic and advanced queries.
+* [Introduction to dashboards](/docs/query-your-data/explore-query-data/dashboards/introduction-dashboards) to customize your dashboards and carry out different actions.
+* [Manage your dashboard](/docs/query-your-data/explore-query-data/dashboards/manage-your-dashboard) to adjust your dashboards display mode, or to add more content to your dashboard.
\ No newline at end of file
diff --git a/src/i18n/content/kr/docs/ai-monitoring/intro-to-ai-monitoring.mdx b/src/i18n/content/kr/docs/ai-monitoring/intro-to-ai-monitoring.mdx
index f0f32683822..c12c1904dd7 100644
--- a/src/i18n/content/kr/docs/ai-monitoring/intro-to-ai-monitoring.mdx
+++ b/src/i18n/content/kr/docs/ai-monitoring/intro-to-ai-monitoring.mdx
@@ -5,7 +5,7 @@ freshnessValidatedDate: '2024-11-04T00:00:00.000Z'
translationType: machine
---
-AI monitoring is our application performance monitoring (APM) solution for AI. When you enable AI monitoring, our APM agent gives you end-to-end visibility into the performance, cost, and quality of [supported models](/docs/ai-monitoring/compatibility-requirements-ai-monitoring) from vendors such as OpenAI and BedRock. Explore how users interact with an AI assistant, drill into trace-level details about a model's response to an AI event, and compare the performance of different models across app environments.
+AI monitoring is our application performance monitoring (APM) solution for AI. When you enable AI monitoring, our APM agent gives you end-to-end visibility into the performance, cost, and quality of [supported models](/docs/ai-monitoring/compatibility-requirements-ai-monitoring) from vendors such as OpenAI, Bedrock, and DeepSeek. Explore how users interact with an AI assistant, drill into trace-level details about a model's response to an AI event, and compare the performance of different models across app environments.
diff --git a/src/i18n/content/kr/docs/alerts/get-notified/new-relic-webhook-for-microsoft-teams-workflow.mdx b/src/i18n/content/kr/docs/alerts/get-notified/new-relic-webhook-for-microsoft-teams-workflow.mdx
new file mode 100644
index 00000000000..44d4e8b3c42
--- /dev/null
+++ b/src/i18n/content/kr/docs/alerts/get-notified/new-relic-webhook-for-microsoft-teams-workflow.mdx
@@ -0,0 +1,171 @@
+---
+title: New Relic webhook for Microsoft Teams workflow
+tags:
+ - Alerts
+ - Incident intelligence
+ - New Relic webhook for Microsoft Teams workflow
+metaDescription: Read about how to add a New Relic webhook for Microsoft Teams workflow.
+freshnessValidatedDate: never
+translationType: machine
+---
+
+Microsoft is retiring the Microsoft 365 webhook-based connectors service in Teams by the end of 2025. To continue receiving New Relic alert notifications, you can create a workflow within Microsoft Teams using the Workflows app. This document provides instructions to update your New Relic alert destinations and workflows, ensuring a smooth transition and uninterrupted alerts in your Teams channels. For more information about the retirement of the Office 365 connectors service, see [the Microsoft developer blog](https://devblogs.microsoft.com/microsoft365dev/retirement-of-office-365-connectors-within-microsoft-teams/).
+
+**Prerequisites:**
+
+* Create a new workflow in Microsoft Teams for New Relic alerts. After creating the workflow, copy the POST request URL; you'll need this URL in New Relic. For more information, see [Microsoft's documentation for creating a workflow in Teams](https://support.microsoft.com/en-us/office/create-incoming-webhooks-with-workflows-for-microsoft-teams-8ae491c7-0394-4861-ba59-055e33f75498). An optional curl check is sketched below.
+
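+Optionally, you can verify the copied URL before wiring it into New Relic by POSTing a minimal Adaptive Card to it. This is only a sketch, not New Relic functionality; the URL below is a placeholder for the POST request URL you copied from Teams:
+
+```shell
+# Hypothetical URL: replace it with the POST request URL copied from your Teams workflow.
+curl -X POST 'https://prod-00.westus.logic.azure.com/workflows/REPLACE_ME/triggers/manual/paths/invoke' \
+  -H 'Content-Type: application/json' \
+  -d '{
+    "type": "message",
+    "attachments": [
+      {
+        "contentType": "application/vnd.microsoft.card.adaptive",
+        "content": {
+          "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
+          "type": "AdaptiveCard",
+          "version": "1.2",
+          "body": [{ "type": "TextBlock", "text": "Test message from New Relic webhook setup" }]
+        }
+      }
+    ]
+  }'
+```
+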
+**To add a New Relic webhook for Microsoft Teams workflow:**
+
+1. Update the existing webhook destination:
+
+   1. Go to **[one.newrelic.com > All capabilities](https://one.newrelic.com/all-capabilities) > Alerts > Enrich and Notify > Destinations**.
+   2. Click the required webhook destination linked to Microsoft Teams to edit it.
+   3. After creating the workflow in Teams, in the **Endpoint URL** field, replace the existing URL with the new one.
+
+   4. Click **Update destination**.
+
+2. Update the existing webhook workflow:
+
+   1. Go to **[one.newrelic.com > All capabilities](https://one.newrelic.com/all-capabilities) > Alerts > Enrich and Notify > Workflows**.
+   2. To edit the notification payload, click the required workflow linked to the destination.
+
+   3. On the Edit notification message screen, in the **Template** field, copy and paste the following payload:
+
+ ```json
+ {
+ "type": "message",
+ "attachments": [
+ {
+ "contentType": "application/vnd.microsoft.card.adaptive",
+ "contentUrl": null,
+ "content": {
+ "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
+ "type": "AdaptiveCard",
+ "version": "1.2",
+ "msteams": { "width": "full" },
+ "body": [
+ {
+ "type": "ColumnSet",
+ "columns": [
+
+ {
+ "type": "Column",
+ "items": [
+ {
+ "type": "Image",
+ "style": "Person",
+ "url": "https://avatars.slack-edge.com/2022-06-02/3611814361970_f6a28959c2e7258660ea_512.png",
+ "size": "Small"
+ }
+ ],
+ "width": "auto"
+ },
+
+ {
+ "type": "Column",
+ "items": [
+ {
+ "type": "TextBlock",
+ "size": "large",
+ "weight": "bolder",
+ "text": "{{ priorityText }} priority issue is {{#if issueClosedAt}}CLOSED{{else}}{{#if issueAcknowledgedAt}}ACKNOWLEDGED{{else}}ACTIVATED{{/if}}{{/if}}"
+ },
+ {
+ "type": "TextBlock",
+ "size": "large",
+ "wrap": "true",
+ "maxLines": "2",
+ "weight": "bolder",
+ "text": "[{{ issueTitle }}]({{ issuePageUrl }})"
+ }
+ ],
+ "width": "stretch"
+ }
+ ]
+ },
+ {{#if accumulations.conditionDescription.[0]}}
+ {
+ "type": "TextBlock",
+ "text": {{ json accumulations.conditionDescription.[0] }},
+ "wrap": true
+ },
+ {{/if}}
+ {{#eq "Not Available" violationChartUrl}}
+ {{else}}
+ {
+ "type": "Image",
+ "url": {{ json violationChartUrl }}
+ },
+ {{/eq}}
+ {
+ "type": "FactSet",
+ "facts": [
+ {
+ "title": "*Impacted entities:*",
+ "value": "{{#each entitiesData.names}}{{#lt @index 5}}{{this}}{{#unless @last}},{{/unless}}{{/lt}}{{/each}}"
+ },
+ {{#if accumulations.policyName }}
+ {
+ "title": "*Policy:*",
+ "value": {{ json accumulations.policyName.[0]}}
+ },
+ {{/if}}
+ {{#if accumulations.conditionName }}
+ {
+ "title": "*Condition:*",
+ "value": {{ json accumulations.conditionName.[0]}}
+ },
+ {{#eq impactedEntitiesCount 1}}
+ {{else}}
+ {
+ "title": "*Total Incidents:*",
+ "value": {{ json impactedEntitiesCount}}
+ },
+ {{/eq}}
+ {{/if}}
+ {
+ "title": "Workflow Name:",
+ "value": {{ json workflowName }}
+ }
+ ]
+ },
+ {
+ "type": "ActionSet",
+ "actions": [
+ {
+ "type": "Action.OpenUrl",
+ "title": "📥 Acknowledge",
+ "url": {{ json issueAckUrl }}
+ },
+ {
+ "type": "Action.OpenUrl",
+ "title": "✔️ Close",
+ "url": {{ json issueCloseUrl }}
+ }
+ {{#if accumulations.deepLinkUrl}}
+ ,{
+ "type": "Action.OpenUrl",
+ "title": "🔎 View Query",
+ "url": {{ json accumulations.deepLinkUrl.[0] }},
+ "mode": "secondary"
+ }
+ {{/if}}
+ {{#if accumulations.runbookUrl}}
+ ,{
+ "type": "Action.OpenUrl",
+ "title": "📕 View Runbook",
+ "url": {{ json accumulations.runbookUrl.[0] }},
+ "mode": "secondary"
+ }
+ {{/if}}
+ ]
+ }
+ ]
+ }
+ }
+ ]
+ }
+ ```
+
+   4. Click **Save message**.
\ No newline at end of file
diff --git a/src/i18n/content/kr/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx b/src/i18n/content/kr/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx
new file mode 100644
index 00000000000..0bcf2f8e2b2
--- /dev/null
+++ b/src/i18n/content/kr/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx
@@ -0,0 +1,231 @@
+---
+title: Snowflake integration with Flex
+tags:
+ - Snowflake integration
+ - New Relic integrations
+metaDescription: Install our Snowflake dashboards and see your Snowflake data in New Relic.
+freshnessValidatedDate: never
+translationType: machine
+---
+
+Our Snowflake integration lets you collect comprehensive data on many aspects, including query performance, storage system health, warehouse status, and billing information.
+
+
+
+
+  Once you set up the Snowflake integration with New Relic, you can immediately see your data in dashboards like these.
+
+
+
+
+  ## Install the infrastructure agent [#infra-install]
+
+  To use the Snowflake integration, you must also [install the infrastructure agent](/docs/infrastructure/install-infrastructure-agent/get-started/install-infrastructure-agent-new-relic/) on the same host. The infrastructure agent monitors the host itself, while the integration you'll install in the next step extends your monitoring with Snowflake-specific data.
+
+
+
+  ## Set up Snowflake metrics
+
+  Run the commands below to store Snowflake metrics in JSON format so that nri-flex can read them. Be sure to modify ACCOUNT, USERNAME, and SNOWSQL\_PWD as appropriate.
+
+ ```shell
+
+ # Run the below command as a 1 minute cronjob
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SERVICE_TYPE, NAME, AVG(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_AVERAGE", SUM(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_SUM", AVG(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_AVERAGE", SUM(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_SUM", AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."METERING_HISTORY" WHERE start_time >= DATE_TRUNC(day, CURRENT_DATE()) GROUP BY 1, 2;' > /tmp/snowflake-account-metering.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, AVG(AVG_RUNNING) AS "RUNNING_AVERAGE", AVG(AVG_QUEUED_LOAD) AS "QUEUED_LOAD_AVERAGE", AVG(AVG_QUEUED_PROVISIONING) AS "QUEUED_PROVISIONING_AVERAGE", AVG(AVG_BLOCKED) AS "BLOCKED_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_LOAD_HISTORY" GROUP BY 1;' > /tmp/snowflake-warehouse-load-history-metrics.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, avg(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_AVERAGE", sum(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_SUM", avg(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_AVERAGE", sum(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_SUM", avg(CREDITS_USED) as "CREDITS_USED_AVERAGE", sum(CREDITS_USED) as "CREDITS_USED_SUM" from "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" group by 1;' > /tmp/snowflake-warehouse-metering.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT table_name, table_schema, avg(ACTIVE_BYTES) as "ACTIVE_BYTES_AVERAGE", avg(TIME_TRAVEL_BYTES) as "TIME_TRAVEL_BYTES_AVERAGE", avg(FAILSAFE_BYTES) as "FAILSAFE_BYTES_AVERAGE", avg(RETAINED_FOR_CLONE_BYTES) as "RETAINED_FOR_CLONE_BYTES_AVERAGE" from "SNOWFLAKE"."ACCOUNT_USAGE"."TABLE_STORAGE_METRICS" group by 1, 2;' > /tmp/snowflake-table-storage-metrics.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT STORAGE_BYTES, STAGE_BYTES, FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STORAGE_USAGE" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-storage-usage.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT USAGE_DATE, AVG(AVERAGE_STAGE_BYTES) FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STAGE_STORAGE_USAGE_HISTORY" GROUP BY USAGE_DATE;' > /tmp/snowflake-stage-storage-usage-history.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."REPLICATION_USAGE_HISTORY" GROUP BY DATABASE_NAME;' > /tmp/snowflake-replication-usage-history.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(EXECUTION_TIME) AS "EXECUTION_TIME_AVERAGE", AVG(COMPILATION_TIME) AS "COMPILATION_TIME_AVERAGE", AVG(BYTES_SCANNED) AS "BYTES_SCANNED_AVERAGE", AVG(BYTES_WRITTEN) AS "BYTES_WRITTEN_AVERAGE", AVG(BYTES_DELETED) AS "BYTES_DELETED_AVERAGE", AVG(BYTES_SPILLED_TO_LOCAL_STORAGE) AS "BYTES_SPILLED_TO_LOCAL_STORAGE_AVERAGE", AVG(BYTES_SPILLED_TO_REMOTE_STORAGE) AS "BYTES_SPILLED_TO_REMOTE_STORAGE_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" GROUP BY QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME;' > /tmp/snowflake-query-history.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT PIPE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_INSERTED) AS "BYTES_INSERTED_AVERAGE", SUM(BYTES_INSERTED) AS "BYTES_INSERTED_SUM", AVG(FILES_INSERTED) AS "FILES_INSERTED_AVERAGE", SUM(FILES_INSERTED) AS "FILES_INSERTED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."PIPE_USAGE_HISTORY" GROUP BY PIPE_NAME;' > /tmp/snowflake-pipe-usage.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT QUERY_ID, QUERY_TEXT, (EXECUTION_TIME / 60000) AS EXEC_TIME, WAREHOUSE_NAME, USER_NAME, EXECUTION_STATUS FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" WHERE EXECUTION_STATUS = '\''SUCCESS'\'' ORDER BY EXECUTION_TIME DESC;' > /tmp/snowflake-longest-queries.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT EVENT_ID, EVENT_TIMESTAMP, EVENT_TYPE, REPORTED_CLIENT_TYPE, REPORTED_CLIENT_VERSION, FIRST_AUTHENTICATION_FACTOR, SECOND_AUTHENTICATION_FACTOR, IS_SUCCESS, ERROR_CODE, ERROR_MESSAGE FROM "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY" WHERE IS_SUCCESS = '\''NO'\'';' > /tmp/snowflake-login-failures.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT DATABASE_NAME, AVERAGE_DATABASE_BYTES, AVERAGE_FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATABASE_STORAGE_USAGE_HISTORY" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-database-storage-usage.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT SOURCE_CLOUD, SOURCE_REGION, TARGET_CLOUD, TARGET_REGION, TRANSFER_TYPE, AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATA_TRANSFER_HISTORY" GROUP BY 1, 2, 3, 4, 5;' > /tmp/snowflake-data-transfer-usage.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT WAREHOUSE_NAME, SUM(CREDITS_USED) AS TOTAL_CREDITS_USED FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" GROUP BY 1 ORDER BY 2 DESC;' > /tmp/snowflake-credit-usage-by-warehouse.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'SELECT TABLE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_AVERAGE", SUM(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_SUM", AVG(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_AVERAGE", SUM(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."AUTOMATIC_CLUSTERING_HISTORY" GROUP BY 1, 2, 3;' > /tmp/snowflake-automatic-clustering.json
+ SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a -u -q 'select USER_NAME,EVENT_TYPE,IS_SUCCESS,ERROR_CODE,ERROR_MESSAGE,FIRST_AUTHENTICATION_FACTOR,SECOND_AUTHENTICATION_FACTOR from "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY";' > /tmp/snowflake-account-details.json
+
+ ```
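+
+  For instance, a minimal cron sketch, assuming you save the snowsql commands above into a hypothetical executable script `/usr/local/bin/snowflake-metrics.sh`:
+
+  ```shell
+  # Hypothetical wrapper script path; adjust it to wherever you saved the commands above.
+  # This crontab entry runs the script every minute, matching the 1-minute cadence noted above.
+  * * * * * /usr/local/bin/snowflake-metrics.sh
+  ```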
+
+
+
+  ## Enable the Snowflake integration with nri-flex
+
+  Follow these steps to set up the Snowflake integration:
+
+  1. Create a file named `nri-snowflake-config.yml` in the integrations directory:
+
+ ```shell
+
+ touch /etc/newrelic-infra/integrations.d/nri-snowflake-config.yml
+
+ ```
+
+  2. To enable the agent to capture Snowflake data, add the following snippet to the `nri-snowflake-config.yml` file:
+
+ ```yml
+
+ ---
+ integrations:
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeAccountMetering
+ apis:
+ - name: snowflakeAccountMetering
+ file: /tmp/snowflake-account-metering.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeWarehouseLoadHistory
+ apis:
+ - name: snowflakeWarehouseLoadHistory
+ file: /tmp/snowflake-warehouse-load-history-metrics.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeWarehouseMetering
+ apis:
+ - name: snowflakeWarehouseMetering
+ file: /tmp/snowflake-warehouse-metering.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeTableStorage
+ apis:
+ - name: snowflakeTableStorage
+ file: /tmp/snowflake-table-storage-metrics.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeStageStorageUsage
+ apis:
+ - name: snowflakeStageStorageUsage
+ file: /tmp/snowflake-stage-storage-usage-history.json
+ - name: nri-flex
+ interval: 30s
+ config:
+       name: snowflakeReplicationUsage
+       apis:
+         - name: snowflakeReplicationUsage
+ file: /tmp/snowflake-replication-usage-history.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeQueryHistory
+ apis:
+ - name: snowflakeQueryHistory
+ file: /tmp/snowflake-query-history.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakePipeUsage
+ apis:
+ - name: snowflakePipeUsage
+ file: /tmp/snowflake-pipe-usage.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeLongestQueries
+ apis:
+ - name: snowflakeLongestQueries
+ file: /tmp/snowflake-longest-queries.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeLoginFailure
+ apis:
+ - name: snowflakeLoginFailure
+ file: /tmp/snowflake-login-failures.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeDatabaseStorageUsage
+ apis:
+ - name: snowflakeDatabaseStorageUsage
+ file: /tmp/snowflake-database-storage-usage.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeDataTransferUsage
+ apis:
+ - name: snowflakeDataTransferUsage
+ file: /tmp/snowflake-data-transfer-usage.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeCreditUsageByWarehouse
+ apis:
+ - name: snowflakeCreditUsageByWarehouse
+ file: /tmp/snowflake-credit-usage-by-warehouse.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeAutomaticClustering
+ apis:
+ - name: snowflakeAutomaticClustering
+ file: /tmp/snowflake-automatic-clustering.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeStorageUsage
+ apis:
+ - name: snowflakeStorageUsage
+ file: /tmp/snowflake-storage-usage.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeAccountDetails
+ apis:
+ - name: snowflakeAccountDetails
+ file: /tmp/snowflake-account-details.json
+
+ ```
+
+
+
+  ## Restart the New Relic infrastructure agent
+
+  Restart your infrastructure agent:
+
+ ```shell
+
+ sudo systemctl restart newrelic-infra.service
+
+ ```
+
+  Within a few minutes, your application will send metrics to [one.newrelic.com](https://one.newrelic.com).
+
+
+
+  ## Find your data
+
+  You can choose our pre-built dashboard template named `Snowflake` to monitor your Snowflake application metrics. Follow these steps to use the pre-built dashboard template:
+
+  1. From [one.newrelic.com](https://one.newrelic.com), go to the **+ Add data** page.
+  2. Click **Dashboards**.
+  3. In the search bar, type `Snowflake`.
+  4. The Snowflake dashboard should appear. Click on it to install it.
+
+  Your Snowflake dashboard is considered a custom dashboard and can be found in the **Dashboards** UI. For docs on using and editing dashboards, see [our dashboard docs](/docs/query-your-data/explore-query-data/dashboards/introduction-dashboards).
+
+  Here is a NRQL query to check Snowflake metrics:
+
+ ```sql
+
+ SELECT * from snowflakeAccountSample
+
+ ```
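+
+  Each flex API in the config above also produces its own event type. A hypothetical example, assuming nri-flex appends `Sample` to the API names configured earlier and keeps the column aliases as attributes:
+
+  ```sql
+  SELECT average(CREDITS_USED_AVERAGE) FROM snowflakeWarehouseMeteringSample FACET WAREHOUSE_NAME TIMESERIES
+  ```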
+
+
+
+## What's next?
+
+To learn more about building NRQL queries and generating dashboards, check out these docs:
+
+* [Introduction to the query builder](/docs/query-your-data/explore-query-data/query-builder/introduction-query-builder) to create basic and advanced queries.
+* [Introduction to dashboards](/docs/query-your-data/explore-query-data/dashboards/introduction-dashboards) to customize your dashboards and carry out different actions.
+* [Manage your dashboard](/docs/query-your-data/explore-query-data/dashboards/manage-your-dashboard) to adjust your dashboards display mode, or to add more content to your dashboard.
\ No newline at end of file
diff --git a/src/i18n/content/pt/docs/ai-monitoring/intro-to-ai-monitoring.mdx b/src/i18n/content/pt/docs/ai-monitoring/intro-to-ai-monitoring.mdx
index 86a8a0c90c2..2d1ea5d4b40 100644
--- a/src/i18n/content/pt/docs/ai-monitoring/intro-to-ai-monitoring.mdx
+++ b/src/i18n/content/pt/docs/ai-monitoring/intro-to-ai-monitoring.mdx
@@ -5,7 +5,7 @@ freshnessValidatedDate: '2024-11-04T00:00:00.000Z'
translationType: machine
---
-AI monitoring is our application performance monitoring (APM) solution for AI. When you enable AI monitoring, our APM agent can give you end-to-end visibility into the performance, cost, and quality of [supported models](/docs/ai-monitoring/compatibility-requirements-ai-monitoring) from vendors such as OpenAI and BedRock. Explore how users interact with an AI assistant, drill into trace-level details about a model's response to an AI event, and compare the performance of different models across app environments.
+AI monitoring is our application performance monitoring (APM) solution for AI. When you enable AI monitoring, our APM agent can give you end-to-end visibility into the performance, cost, and quality of [supported models](/docs/ai-monitoring/compatibility-requirements-ai-monitoring) from vendors such as OpenAI, Bedrock, and DeepSeek. Explore how users interact with an AI assistant, drill into trace-level details about a model's response to an AI event, and compare the performance of different models across app environments.
diff --git a/src/i18n/content/pt/docs/alerts/get-notified/new-relic-webhook-for-microsoft-teams-workflow.mdx b/src/i18n/content/pt/docs/alerts/get-notified/new-relic-webhook-for-microsoft-teams-workflow.mdx
new file mode 100644
index 00000000000..ea675528023
--- /dev/null
+++ b/src/i18n/content/pt/docs/alerts/get-notified/new-relic-webhook-for-microsoft-teams-workflow.mdx
@@ -0,0 +1,171 @@
+---
+title: New Relic webhook for Microsoft Teams workflow
+tags:
+ - Alerts
+ - Incident intelligence
+ - New Relic webhook for Microsoft Teams workflow
+metaDescription: Read about how to add a New Relic webhook for Microsoft Teams workflow.
+freshnessValidatedDate: never
+translationType: machine
+---
+
+Microsoft is retiring the Microsoft 365 webhook-based connectors service in Teams by the end of 2025. To continue receiving New Relic alert notifications, you can create a workflow within Microsoft Teams using the Workflows app. This document provides instructions to update your New Relic alert destinations and workflows, ensuring a smooth transition and uninterrupted alerts in your Teams channels. For more information about the retirement of the Office 365 connectors service, see [the Microsoft developer blog](https://devblogs.microsoft.com/microsoft365dev/retirement-of-office-365-connectors-within-microsoft-teams/).
+
+**Prerequisites:**
+
+* Create a new workflow in Microsoft Teams for New Relic alerts. After creating the workflow, copy the POST request URL; you'll need this URL in New Relic. For more information, see [Microsoft's documentation for creating a workflow in Teams](https://support.microsoft.com/en-us/office/create-incoming-webhooks-with-workflows-for-microsoft-teams-8ae491c7-0394-4861-ba59-055e33f75498). An optional curl check is sketched below.
+
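+Optionally, you can verify the copied URL before wiring it into New Relic by POSTing a minimal Adaptive Card to it. This is only a sketch, not New Relic functionality; the URL below is a placeholder for the POST request URL you copied from Teams:
+
+```shell
+# Hypothetical URL: replace it with the POST request URL copied from your Teams workflow.
+curl -X POST 'https://prod-00.westus.logic.azure.com/workflows/REPLACE_ME/triggers/manual/paths/invoke' \
+  -H 'Content-Type: application/json' \
+  -d '{
+    "type": "message",
+    "attachments": [
+      {
+        "contentType": "application/vnd.microsoft.card.adaptive",
+        "content": {
+          "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
+          "type": "AdaptiveCard",
+          "version": "1.2",
+          "body": [{ "type": "TextBlock", "text": "Test message from New Relic webhook setup" }]
+        }
+      }
+    ]
+  }'
+```
+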
+**To add a New Relic webhook for Microsoft Teams workflow:**
+
+1. Update the existing webhook destination:
+
+   1. Go to **[one.newrelic.com > All capabilities](https://one.newrelic.com/all-capabilities) > Alerts > Enrich and Notify > Destinations**.
+   2. Click the required webhook destination linked to Microsoft Teams to edit it.
+   3. After creating the workflow in Teams, in the **Endpoint URL** field, replace the existing URL with the new one.
+
+   4. Click **Update destination**.
+
+2. Update the existing webhook workflow:
+
+   1. Go to **[one.newrelic.com > All capabilities](https://one.newrelic.com/all-capabilities) > Alerts > Enrich and Notify > Workflows**.
+   2. To edit the notification payload, click the required workflow linked to the destination.
+
+   3. On the Edit notification message screen, in the **Template** field, copy and paste the following payload:
+
+ ```json
+ {
+ "type": "message",
+ "attachments": [
+ {
+ "contentType": "application/vnd.microsoft.card.adaptive",
+ "contentUrl": null,
+ "content": {
+ "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
+ "type": "AdaptiveCard",
+ "version": "1.2",
+ "msteams": { "width": "full" },
+ "body": [
+ {
+ "type": "ColumnSet",
+ "columns": [
+
+ {
+ "type": "Column",
+ "items": [
+ {
+ "type": "Image",
+ "style": "Person",
+ "url": "https://avatars.slack-edge.com/2022-06-02/3611814361970_f6a28959c2e7258660ea_512.png",
+ "size": "Small"
+ }
+ ],
+ "width": "auto"
+ },
+
+ {
+ "type": "Column",
+ "items": [
+ {
+ "type": "TextBlock",
+ "size": "large",
+ "weight": "bolder",
+ "text": "{{ priorityText }} priority issue is {{#if issueClosedAt}}CLOSED{{else}}{{#if issueAcknowledgedAt}}ACKNOWLEDGED{{else}}ACTIVATED{{/if}}{{/if}}"
+ },
+ {
+ "type": "TextBlock",
+ "size": "large",
+ "wrap": "true",
+ "maxLines": "2",
+ "weight": "bolder",
+ "text": "[{{ issueTitle }}]({{ issuePageUrl }})"
+ }
+ ],
+ "width": "stretch"
+ }
+ ]
+ },
+ {{#if accumulations.conditionDescription.[0]}}
+ {
+ "type": "TextBlock",
+ "text": {{ json accumulations.conditionDescription.[0] }},
+ "wrap": true
+ },
+ {{/if}}
+ {{#eq "Not Available" violationChartUrl}}
+ {{else}}
+ {
+ "type": "Image",
+ "url": {{ json violationChartUrl }}
+ },
+ {{/eq}}
+ {
+ "type": "FactSet",
+ "facts": [
+ {
+ "title": "*Impacted entities:*",
+ "value": "{{#each entitiesData.names}}{{#lt @index 5}}{{this}}{{#unless @last}},{{/unless}}{{/lt}}{{/each}}"
+ },
+ {{#if accumulations.policyName }}
+ {
+ "title": "*Policy:*",
+ "value": {{ json accumulations.policyName.[0]}}
+ },
+ {{/if}}
+ {{#if accumulations.conditionName }}
+ {
+ "title": "*Condition:*",
+ "value": {{ json accumulations.conditionName.[0]}}
+ },
+ {{#eq impactedEntitiesCount 1}}
+ {{else}}
+ {
+ "title": "*Total Incidents:*",
+ "value": {{ json impactedEntitiesCount}}
+ },
+ {{/eq}}
+ {{/if}}
+ {
+ "title": "Workflow Name:",
+ "value": {{ json workflowName }}
+ }
+ ]
+ },
+ {
+ "type": "ActionSet",
+ "actions": [
+ {
+ "type": "Action.OpenUrl",
+ "title": "📥 Acknowledge",
+ "url": {{ json issueAckUrl }}
+ },
+ {
+ "type": "Action.OpenUrl",
+ "title": "✔️ Close",
+ "url": {{ json issueCloseUrl }}
+ }
+ {{#if accumulations.deepLinkUrl}}
+ ,{
+ "type": "Action.OpenUrl",
+ "title": "🔎 View Query",
+ "url": {{ json accumulations.deepLinkUrl.[0] }},
+ "mode": "secondary"
+ }
+ {{/if}}
+ {{#if accumulations.runbookUrl}}
+ ,{
+ "type": "Action.OpenUrl",
+ "title": "📕 View Runbook",
+ "url": {{ json accumulations.runbookUrl.[0] }},
+ "mode": "secondary"
+ }
+ {{/if}}
+ ]
+ }
+ ]
+ }
+ }
+ ]
+ }
+ ```
+
+   4. Click **Save message**.
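+
+Optionally, before relying on New Relic to deliver alerts, you can smoke-test the Teams workflow endpoint yourself. This is a minimal sketch, with `YOUR_WORKFLOW_URL` as a placeholder for the POST request URL you copied from Teams; workflow webhooks accept a `message` payload carrying an Adaptive Card attachment, like the template above:
+
+```shell
+# Hypothetical smoke test: replace YOUR_WORKFLOW_URL with the POST URL copied from Teams.
+curl -X POST "YOUR_WORKFLOW_URL" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "type": "message",
+    "attachments": [
+      {
+        "contentType": "application/vnd.microsoft.card.adaptive",
+        "content": {
+          "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
+          "type": "AdaptiveCard",
+          "version": "1.2",
+          "body": [{ "type": "TextBlock", "text": "Test notification from New Relic" }]
+        }
+      }
+    ]
+  }'
+```
+
+If the workflow is set up correctly, a test card should appear in the target Teams channel.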
\ No newline at end of file
diff --git a/src/i18n/content/pt/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx b/src/i18n/content/pt/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx
new file mode 100644
index 00000000000..7dab552c972
--- /dev/null
+++ b/src/i18n/content/pt/docs/infrastructure/host-integrations/host-integrations-list/snowflake-integration.mdx
@@ -0,0 +1,231 @@
+---
+title: Snowflake integration with Flex
+tags:
+ - Snowflake integration
+ - New Relic integrations
+metaDescription: Install our Snowflake dashboards and see your Snowflake data in New Relic.
+freshnessValidatedDate: never
+translationType: machine
+---
+
+Our Snowflake integration lets you collect comprehensive data on query performance, storage system health, warehouse status, and billing information.
+
+  After you set up the Snowflake integration with New Relic, you can see your data in prebuilt dashboards like these, right out of the box.
+
+  ## Install the infrastructure agent [#infra-install]
+
+  To use the Snowflake integration, you also need to [install the infrastructure agent](/docs/infrastructure/install-infrastructure-agent/get-started/install-infrastructure-agent-new-relic/) on the same host. The infrastructure agent monitors the host itself, while the integration you'll install in the next step extends your monitoring with Snowflake-specific data.
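+
+  If you don't have the agent yet, one common path on Linux hosts is New Relic's guided install. This is a sketch; `YOUR_API_KEY` and `YOUR_ACCOUNT_ID` are placeholders you must replace with your own API key and account ID:
+
+  ```shell
+  # Guided install of the infrastructure agent (YOUR_API_KEY and YOUR_ACCOUNT_ID are placeholders).
+  curl -Ls https://download.newrelic.com/install/newrelic-cli/scripts/install.sh | bash && \
+    sudo NEW_RELIC_API_KEY=YOUR_API_KEY NEW_RELIC_ACCOUNT_ID=YOUR_ACCOUNT_ID /usr/local/bin/newrelic install
+  ```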
+
+
+
+  ## Configure Snowflake metrics
+
+  Run the commands below to store the Snowflake metrics as JSON files that nri-flex can read. Make sure to replace the `ACCOUNT`, `USERNAME`, and `SNOWSQL_PWD` placeholders with your own Snowflake account identifier, user name, and password.
+
+ ```shell
+
+  # Run the below commands as a 1-minute cron job. Replace ACCOUNT, USERNAME, and the SNOWSQL_PWD value with your own credentials.
+  SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a ACCOUNT -u USERNAME -q 'SELECT SERVICE_TYPE, NAME, AVG(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_AVERAGE", SUM(CREDITS_USED_COMPUTE) AS "CREDITS_USED_COMPUTE_SUM", AVG(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_AVERAGE", SUM(CREDITS_USED_CLOUD_SERVICES) AS "CREDITS_USED_CLOUD_SERVICES_SUM", AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."METERING_HISTORY" WHERE start_time >= DATE_TRUNC(day, CURRENT_DATE()) GROUP BY 1, 2;' > /tmp/snowflake-account-metering.json
+  SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a ACCOUNT -u USERNAME -q 'SELECT WAREHOUSE_NAME, AVG(AVG_RUNNING) AS "RUNNING_AVERAGE", AVG(AVG_QUEUED_LOAD) AS "QUEUED_LOAD_AVERAGE", AVG(AVG_QUEUED_PROVISIONING) AS "QUEUED_PROVISIONING_AVERAGE", AVG(AVG_BLOCKED) AS "BLOCKED_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_LOAD_HISTORY" GROUP BY 1;' > /tmp/snowflake-warehouse-load-history-metrics.json
+  SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a ACCOUNT -u USERNAME -q 'SELECT WAREHOUSE_NAME, avg(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_AVERAGE", sum(CREDITS_USED_COMPUTE) as "CREDITS_USED_COMPUTE_SUM", avg(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_AVERAGE", sum(CREDITS_USED_CLOUD_SERVICES) as "CREDITS_USED_CLOUD_SERVICES_SUM", avg(CREDITS_USED) as "CREDITS_USED_AVERAGE", sum(CREDITS_USED) as "CREDITS_USED_SUM" from "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" group by 1;' > /tmp/snowflake-warehouse-metering.json
+  SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a ACCOUNT -u USERNAME -q 'SELECT table_name, table_schema, avg(ACTIVE_BYTES) as "ACTIVE_BYTES_AVERAGE", avg(TIME_TRAVEL_BYTES) as "TIME_TRAVEL_BYTES_AVERAGE", avg(FAILSAFE_BYTES) as "FAILSAFE_BYTES_AVERAGE", avg(RETAINED_FOR_CLONE_BYTES) as "RETAINED_FOR_CLONE_BYTES_AVERAGE" from "SNOWFLAKE"."ACCOUNT_USAGE"."TABLE_STORAGE_METRICS" group by 1, 2;' > /tmp/snowflake-table-storage-metrics.json
+  SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a ACCOUNT -u USERNAME -q 'SELECT STORAGE_BYTES, STAGE_BYTES, FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STORAGE_USAGE" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-storage-usage.json
+  SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a ACCOUNT -u USERNAME -q 'SELECT USAGE_DATE, AVG(AVERAGE_STAGE_BYTES) FROM "SNOWFLAKE"."ACCOUNT_USAGE"."STAGE_STORAGE_USAGE_HISTORY" GROUP BY USAGE_DATE;' > /tmp/snowflake-stage-storage-usage-history.json
+  SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a ACCOUNT -u USERNAME -q 'SELECT DATABASE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."REPLICATION_USAGE_HISTORY" GROUP BY DATABASE_NAME;' > /tmp/snowflake-replication-usage-history.json
+  SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a ACCOUNT -u USERNAME -q 'SELECT QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(EXECUTION_TIME) AS "EXECUTION_TIME_AVERAGE", AVG(COMPILATION_TIME) AS "COMPILATION_TIME_AVERAGE", AVG(BYTES_SCANNED) AS "BYTES_SCANNED_AVERAGE", AVG(BYTES_WRITTEN) AS "BYTES_WRITTEN_AVERAGE", AVG(BYTES_DELETED) AS "BYTES_DELETED_AVERAGE", AVG(BYTES_SPILLED_TO_LOCAL_STORAGE) AS "BYTES_SPILLED_TO_LOCAL_STORAGE_AVERAGE", AVG(BYTES_SPILLED_TO_REMOTE_STORAGE) AS "BYTES_SPILLED_TO_REMOTE_STORAGE_AVERAGE" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" GROUP BY QUERY_TYPE, WAREHOUSE_NAME, DATABASE_NAME, SCHEMA_NAME;' > /tmp/snowflake-query-history.json
+  SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a ACCOUNT -u USERNAME -q 'SELECT PIPE_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(BYTES_INSERTED) AS "BYTES_INSERTED_AVERAGE", SUM(BYTES_INSERTED) AS "BYTES_INSERTED_SUM", AVG(FILES_INSERTED) AS "FILES_INSERTED_AVERAGE", SUM(FILES_INSERTED) AS "FILES_INSERTED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."PIPE_USAGE_HISTORY" GROUP BY PIPE_NAME;' > /tmp/snowflake-pipe-usage.json
+  SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a ACCOUNT -u USERNAME -q 'SELECT QUERY_ID, QUERY_TEXT, (EXECUTION_TIME / 60000) AS EXEC_TIME, WAREHOUSE_NAME, USER_NAME, EXECUTION_STATUS FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY" WHERE EXECUTION_STATUS = '\''SUCCESS'\'' ORDER BY EXECUTION_TIME DESC;' > /tmp/snowflake-longest-queries.json
+  SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a ACCOUNT -u USERNAME -q 'SELECT EVENT_ID, EVENT_TIMESTAMP, EVENT_TYPE, REPORTED_CLIENT_TYPE, REPORTED_CLIENT_VERSION, FIRST_AUTHENTICATION_FACTOR, SECOND_AUTHENTICATION_FACTOR, IS_SUCCESS, ERROR_CODE, ERROR_MESSAGE FROM "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY" WHERE IS_SUCCESS = '\''NO'\'';' > /tmp/snowflake-login-failures.json
+  SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a ACCOUNT -u USERNAME -q 'SELECT DATABASE_NAME, AVERAGE_DATABASE_BYTES, AVERAGE_FAILSAFE_BYTES FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATABASE_STORAGE_USAGE_HISTORY" ORDER BY USAGE_DATE DESC LIMIT 1;' > /tmp/snowflake-database-storage-usage.json
+  SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a ACCOUNT -u USERNAME -q 'SELECT SOURCE_CLOUD, SOURCE_REGION, TARGET_CLOUD, TARGET_REGION, TRANSFER_TYPE, AVG(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_AVERAGE", SUM(BYTES_TRANSFERRED) AS "BYTES_TRANSFERRED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."DATA_TRANSFER_HISTORY" GROUP BY 1, 2, 3, 4, 5;' > /tmp/snowflake-data-transfer-usage.json
+  SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a ACCOUNT -u USERNAME -q 'SELECT WAREHOUSE_NAME, SUM(CREDITS_USED) AS TOTAL_CREDITS_USED FROM "SNOWFLAKE"."ACCOUNT_USAGE"."WAREHOUSE_METERING_HISTORY" GROUP BY 1 ORDER BY 2 DESC;' > /tmp/snowflake-credit-usage-by-warehouse.json
+  SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a ACCOUNT -u USERNAME -q 'SELECT TABLE_NAME, DATABASE_NAME, SCHEMA_NAME, AVG(CREDITS_USED) AS "CREDITS_USED_AVERAGE", SUM(CREDITS_USED) AS "CREDITS_USED_SUM", AVG(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_AVERAGE", SUM(NUM_BYTES_RECLUSTERED) AS "BYTES_RECLUSTERED_SUM", AVG(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_AVERAGE", SUM(NUM_ROWS_RECLUSTERED) AS "ROWS_RECLUSTERED_SUM" FROM "SNOWFLAKE"."ACCOUNT_USAGE"."AUTOMATIC_CLUSTERING_HISTORY" GROUP BY 1, 2, 3;' > /tmp/snowflake-automatic-clustering.json
+  SNOWSQL_PWD='Replaceme' snowsql -o output_format=json -o remove_comments=true -o header=true -o timing=false -o friendly=false -a ACCOUNT -u USERNAME -q 'select USER_NAME,EVENT_TYPE,IS_SUCCESS,ERROR_CODE,ERROR_MESSAGE,FIRST_AUTHENTICATION_FACTOR,SECOND_AUTHENTICATION_FACTOR from "SNOWFLAKE"."ACCOUNT_USAGE"."LOGIN_HISTORY";' > /tmp/snowflake-account-details.json
+
+ ```
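+
+  The comment in the block above assumes these commands run every minute. A minimal sketch of scheduling that with cron, assuming you save the commands to a hypothetical script at `/opt/snowflake-metrics.sh`:
+
+  ```shell
+  # Make the (hypothetical) collection script executable, then register it as a per-minute cron job.
+  chmod +x /opt/snowflake-metrics.sh
+  (crontab -l 2>/dev/null; echo "* * * * * /opt/snowflake-metrics.sh") | crontab -
+  ```
+
+  With this in place, the JSON files under `/tmp` are refreshed every minute, so the 30-second `nri-flex` polling interval configured below always reads reasonably fresh data.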
+
+
+
+  ## Enable the Snowflake integration with nri-flex
+
+  To set up the Snowflake integration, follow these steps:
+
+  1. Create a file named `nri-snowflake-config.yml` in the integrations directory:
+
+ ```shell
+
+ touch /etc/newrelic-infra/integrations.d/nri-snowflake-config.yml
+
+ ```
+
+  2. Add the following snippet to the `nri-snowflake-config.yml` file so the agent can capture Snowflake data:
+
+ ```yml
+
+ ---
+ integrations:
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeAccountMetering
+ apis:
+ - name: snowflakeAccountMetering
+ file: /tmp/snowflake-account-metering.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeWarehouseLoadHistory
+ apis:
+ - name: snowflakeWarehouseLoadHistory
+ file: /tmp/snowflake-warehouse-load-history-metrics.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeWarehouseMetering
+ apis:
+ - name: snowflakeWarehouseMetering
+ file: /tmp/snowflake-warehouse-metering.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeTableStorage
+ apis:
+ - name: snowflakeTableStorage
+ file: /tmp/snowflake-table-storage-metrics.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeStageStorageUsage
+ apis:
+ - name: snowflakeStageStorageUsage
+ file: /tmp/snowflake-stage-storage-usage-history.json
+        - name: nri-flex
+          interval: 30s
+          config:
+            name: snowflakeReplicationUsage
+            apis:
+              - name: snowflakeReplicationUsage
+                file: /tmp/snowflake-replication-usage-history.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeQueryHistory
+ apis:
+ - name: snowflakeQueryHistory
+ file: /tmp/snowflake-query-history.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakePipeUsage
+ apis:
+ - name: snowflakePipeUsage
+ file: /tmp/snowflake-pipe-usage.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeLongestQueries
+ apis:
+ - name: snowflakeLongestQueries
+ file: /tmp/snowflake-longest-queries.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeLoginFailure
+ apis:
+ - name: snowflakeLoginFailure
+ file: /tmp/snowflake-login-failures.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeDatabaseStorageUsage
+ apis:
+ - name: snowflakeDatabaseStorageUsage
+ file: /tmp/snowflake-database-storage-usage.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeDataTransferUsage
+ apis:
+ - name: snowflakeDataTransferUsage
+ file: /tmp/snowflake-data-transfer-usage.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeCreditUsageByWarehouse
+ apis:
+ - name: snowflakeCreditUsageByWarehouse
+ file: /tmp/snowflake-credit-usage-by-warehouse.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeAutomaticClustering
+ apis:
+ - name: snowflakeAutomaticClustering
+ file: /tmp/snowflake-automatic-clustering.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeStorageUsage
+ apis:
+ - name: snowflakeStorageUsage
+ file: /tmp/snowflake-storage-usage.json
+ - name: nri-flex
+ interval: 30s
+ config:
+ name: snowflakeAccountDetails
+ apis:
+ - name: snowflakeAccountDetails
+ file: /tmp/snowflake-account-details.json
+
+ ```
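+
+  Before restarting the agent, you can optionally dry-run the Flex config to confirm it parses your JSON files. This is a sketch based on the usual Linux install path for the `nri-flex` binary; adjust the path and flags if your installation differs:
+
+  ```shell
+  # Dry-run the Flex config and print the samples it would emit (binary path may vary by install).
+  sudo /var/db/newrelic-infra/newrelic-integrations/bin/nri-flex \
+    -verbose -pretty -config_path /etc/newrelic-infra/integrations.d/nri-snowflake-config.yml
+  ```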
+
+
+
+  ## Restart the New Relic infrastructure agent
+
+  Restart your infrastructure agent:
+
+ ```shell
+
+ sudo systemctl restart newrelic-infra.service
+
+ ```
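+
+  To confirm the agent came back up cleanly, check its status:
+
+  ```shell
+  # Verify the infrastructure agent is active after the restart.
+  sudo systemctl status newrelic-infra.service
+  ```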
+
+  In a few minutes, your application will send metrics to [one.newrelic.com](https://one.newrelic.com).
+
+
+
+  ## Find your data
+
+  You can use our prebuilt dashboard template named `Snowflake` to monitor your Snowflake metrics. Follow these steps to use the prebuilt dashboard template:
+
+  1. From [one.newrelic.com](https://one.newrelic.com), go to the **+ Add data** page.
+  2. Click **Dashboards**.
+  3. In the search bar, type `Snowflake`.
+  4. The Snowflake dashboard should appear. Click it to install it.
+
+  Your Snowflake dashboard is considered a custom dashboard and can be found in the **Dashboards** UI. For docs on using and editing dashboards, see [our dashboard docs](/docs/query-your-data/explore-query-data/dashboards/introduction-dashboards).
+
+  Here's a sample NRQL query to check your Snowflake metrics (each Flex API name above is exposed as an event type with a `Sample` suffix), for example the account metering data:
+
+ ```sql
+
+  SELECT * FROM snowflakeAccountMeteringSample
+
+ ```
+
+
+
+## What's next?
+
+To learn more about building NRQL queries and creating dashboards, check out these docs:
+
+* [Introduction to the query builder](/docs/query-your-data/explore-query-data/query-builder/introduction-query-builder) for creating basic and advanced queries.
+* [Introduction to dashboards](/docs/query-your-data/explore-query-data/dashboards/introduction-dashboards) for customizing your dashboards and carrying out different actions.
+* [Manage your dashboard](/docs/query-your-data/explore-query-data/dashboards/manage-your-dashboard) to adjust the dashboard display mode, or to add more content to your dashboards.
\ No newline at end of file