
NVCF Configuration for Azure Monitor#

Add Telemetry Endpoint#

Note

If you have an existing Application Insights instance and Log Analytics workspace, navigate to Overview, followed by JSON View, to capture the connection string.

Azure Monitor Setup

  • Create a new Application Insights Instance

    • Go to Azure. Select Monitor, followed by Application Insights.

../_images/azure_monitor_application_insights.png
  • Under Application Insights, select View, followed by + Create.

    • Choose appropriate values for Subscription, Resource Group, and Log Analytics Workspace

../_images/azure_application_insights.png

  • Select Review + create.

  • Navigate to Overview, followed by JSON View, and capture the ConnectionString value (the general shape of the connection string is sketched below).

../_images/json_view.png
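For reference, the connection string captured from the JSON View typically bundles all of the values used in the next section. A sketch of its general shape, using placeholder values and a variable name (APPINSIGHTS_CONNECTION_STRING) chosen here so it can be reused later; the exact set of key=value pairs may vary by resource:

# General shape only; placeholder values, not real credentials.
export APPINSIGHTS_CONNECTION_STRING="InstrumentationKey=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx;IngestionEndpoint=https://xxxx-x.in.applicationinsights.azure.com/;LiveEndpoint=https://xxxx.livediagnostics.monitor.azure.com/;ApplicationId=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"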

NVCF Create Telemetry Endpoint#

Refer to the NVCF documentation for details on creating a Telemetry Endpoint.

../_images/ngc_cloud_functions_settings.png
  • Under Telemetry Endpoints, select + Add Endpoint

../_images/ngc_telemetry_endpoint_add_endpoint.png

  • Provide an appropriate Name under Endpoint Details. We are using azure-monitor-endpoint.

../_images/ngc_telemetry_endpoint_details.png
  • Select Azure Monitor

  • From the connection string value copied from Azure, paste the following values (a parsing sketch follows the curl example below):

    • Endpoint (IngestionEndpoint): https://xxxx-x.in.applicationinsights.azure.com/

    • Instrumentation Key: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

    • Live Endpoint: https://xxxx.livediagnostics.monitor.azure.com/

    • Application ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

  • Select Logs, followed by Metrics under Telemetry Type

  • Select HTTP for the communication protocol

../_images/ngc_telemetry_endpoint_configuration.png
  • Select Save Configuration

  • Via the CLI, run the following command:

curl -s --location --request POST 'https://api.ngc.nvidia.com/v2/nvcf/telemetries' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer '$NVCF_TOKEN \
--data '{
    "endpoint": "YOUR_AZURE_MONITOR_ENDPOINT",
    "protocol": "HTTP",
    "provider": "AZURE_MONITOR",
    "types": [
        "LOGS",
        "METRICS"
    ],
    "secret": {
        "name": "YOUR_NVCF_TELEMETRY_NAME",
        "value": {
            "instrumentationKey": "YOUR_INSTRUMENTATION_KEY",
            "liveEndpoint": "YOUR_LIVE_ENDPOINT",
            "applicationId": "YOUR_APPLICATION_ID"
        }
    }
}'
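If you stored the full connection string earlier (in APPINSIGHTS_CONNECTION_STRING, as sketched above), a small helper like the following can split it into the individual values. The helper and variable names are illustrative, not part of the NVCF tooling:

# Illustrative helper: extract one key=value pair from the semicolon-separated
# connection string held in APPINSIGHTS_CONNECTION_STRING.
get_cs_field() {
  echo "$APPINSIGHTS_CONNECTION_STRING" | tr ';' '\n' | awk -F= -v k="$1" '$1 == k {print substr($0, length(k) + 2)}'
}

export AZURE_MONITOR_ENDPOINT="$(get_cs_field IngestionEndpoint)"
export INSTRUMENTATION_KEY="$(get_cs_field InstrumentationKey)"
export LIVE_ENDPOINT="$(get_cs_field LiveEndpoint)"
export APPLICATION_ID="$(get_cs_field ApplicationId)"

The resulting variables can then be substituted for the corresponding YOUR_* placeholders in the request above.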

Get Telemetry ID#

Once you have created the telemetry endpoint, capture the telemetryId of the Azure Monitor telemetry endpoint on NVCF. This ID is required to create the function via the CLI or a script.

Note

This step is not required for creating the function via the NVCF UI.

Export your NGC API key so it can be used in the requests below:

export NVCF_TOKEN="nvapi-xxxxxxxxxxxxxxxxxxxxxx"

Run the following command to get the telemetryId of the created Azure Monitor endpoint:

curl -s --location --request GET 'https://api.ngc.nvidia.com/v2/nvcf/telemetries' \
 --header 'Content-Type: application/json' \
 --header 'Authorization: Bearer '$NVCF_TOKEN | jq

Copy the telemetryId field for the created azure-monitor-endpoint:

"telemetryId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"name": "azure-monitor-endpoint",
"endpoint": xxx
.
.
"createdAt":xxx

Store the value in a variable called TELEMETRY_ID:

export TELEMETRY_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
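Alternatively, you can select the endpoint by name and extract its ID in one step. This sketch assumes the response nests the entries under a top-level telemetries array, consistent with the listing shown above; adjust the jq path if your response differs:

# Select the azure-monitor-endpoint entry by name and keep only its telemetryId.
export TELEMETRY_ID=$(curl -s --location --request GET 'https://api.ngc.nvidia.com/v2/nvcf/telemetries' \
 --header 'Content-Type: application/json' \
 --header 'Authorization: Bearer '$NVCF_TOKEN | jq -r '.telemetries[] | select(.name == "azure-monitor-endpoint") | .telemetryId')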

Environment Variables#

The BYOO implementation uses environment variables to control Vector behavior and configuration. These variables are set during NVCF function deployment and control how the container processes logs.

VECTOR_OTEL_ACTIVE

  • Possible values: TRUE; FALSE / not set

  • Function: When TRUE, the container uses Vector for log processing and forwarding to the NVCF collector. When FALSE or unset, the container bypasses Vector and runs the Kit application directly via /entrypoint.sh.

VECTOR_CONF_B64

  • Possible values: base64-encoded string

  • Function: Provides a custom Vector configuration via a base64-encoded string.

    • If you provide a VECTOR_CONF_B64 value, the entrypoint decodes and uses your custom Vector configuration.

    • When not provided, the container uses the default configuration from vector.toml, which is copied to /opt/vector/static_config.toml inside the container.

To base64-encode the configuration file, use the following command:

base64 -w 0 vector.toml
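If you want to supply your own configuration, the sketch below writes a hypothetical vector.toml using generic Vector syntax (it is not the default static_config.toml shipped in the container; the source paths and component names are placeholders) and encodes it for use as the VECTOR_CONF_B64 container environment variable:

# Hypothetical custom Vector config; generic Vector syntax with placeholder paths.
cat > custom-vector.toml <<'EOF'
[sources.app_logs]
type = "file"
include = ["/tmp/*.log"]

[sinks.console_out]
type = "console"
inputs = ["app_logs"]
encoding.codec = "json"
EOF

# Encode it; pass the result as the VECTOR_CONF_B64 container environment variable.
export VECTOR_CONF_B64=$(base64 -w 0 custom-vector.toml)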

Container to Function Flow#

  • Via the CLI:

curl -s -v --location --request POST 'https://api.ngc.nvidia.com/v2/nvcf/functions' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer '$NVCF_TOKEN \
--data '{
    "name": "'${STREAMING_FUNCTION_NAME:-usd-composer}'",
    "inferenceUrl": "'${STREAMING_START_ENDPOINT:-/sign_in}'",
    "inferencePort": '${STREAMING_SERVER_PORT:-49100}',
    "health": {
        "protocol": "HTTP",
        "uri": "/v1/streaming/ready",
        "port": '${CONTROL_SERVER_PORT:-8111}',
        "timeout": "PT10S",
        "expectedStatusCode": 200
    },
    "containerImage": "'$STREAMING_CONTAINER_IMAGE'",
    "apiBodyFormat": "CUSTOM",
    "description": "'${STREAMING_FUNCTION_NAME:-usd-composer}'",
    "functionType": "STREAMING",
    "containerEnvironment": [
        {"key": "NVDA_KIT_NUCLEUS", "value": "'$NUCLEUS_SERVER'"},
        {"key": "OMNI_JWT_ENABLED", "value": "1"},
        {"key": "VECTOR_OTEL_ACTIVE", "value": "TRUE"},
        {"key": "NVDA_KIT_ARGS", "value": "--/app/livestream/nvcf/sessionResumeTimeoutSeconds=300"}
    ],
    "telemetries": {
        "logsTelemetryId": "'$TELEMETRY_ID'",
        "metricsTelemetryId": "'$TELEMETRY_ID'"
    }
}'
  • Via the UI:

../_images/function_configuration.png

Or, if you provide a custom base64-encoded Vector configuration (VECTOR_CONF_B64):

../_images/environment_variables.png
../_images/telemetry_configuration.png
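Whichever path you use, it can be convenient to keep the identifiers returned by the create call for later deployment steps. A sketch, assuming the JSON payload from the CLI example above is stored in a FUNCTION_PAYLOAD variable and that the response nests the IDs under a function object; adjust the jq paths if your response differs:

# Create the function and keep the response so the IDs can be extracted.
CREATE_RESPONSE=$(curl -s --location --request POST 'https://api.ngc.nvidia.com/v2/nvcf/functions' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer '$NVCF_TOKEN \
--data "$FUNCTION_PAYLOAD")

export FUNCTION_ID=$(echo "$CREATE_RESPONSE" | jq -r '.function.id')
export FUNCTION_VERSION_ID=$(echo "$CREATE_RESPONSE" | jq -r '.function.versionId')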

Confirm Telemetry on Azure Monitor#

Sample KQL queries:

For Logs:

AppTraces
| where Properties.function_id == "xxxxxxxxxxxx"
../_images/app_traces.png

For Metrics:

AppMetrics
| where Properties.function_id == "xxxxxxxxxxxx"
../_images/app_metrics.png
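If you prefer to confirm from a terminal, the same workspace tables can be queried with the Azure CLI. A sketch assuming the log-analytics extension is installed, with a placeholder workspace GUID and function ID:

# Query the Log Analytics workspace backing the Application Insights resource.
az monitor log-analytics query \
  --workspace "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
  --analytics-query 'AppTraces | where Properties.function_id == "xxxxxxxxxxxx" | take 10'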