norsk-sdk package
Classes
| Class | Description |
|---|---|
see: NorskInput.browser() |
|
A processor that takes video frames as input and produces two outputs: the original input frames passed through, and browser-rendered frames synchronized to each input frame’s metadata. The HTML page receives frame metadata via a 'norsk-frame' CustomEvent and must call window.cefQuery({request: 'ready'}) when the DOM update is complete. A 'norsk-context' event is dispatched on context changes. |
|
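The frame protocol described above (a 'norsk-frame' CustomEvent in, a window.cefQuery({request: 'ready'}) call out) can be sketched from the page's side. Only the event name and the cefQuery payload come from the description; NorskFrameDetail and applyOverlay are hypothetical names.

```typescript
// Page-side sketch for a Norsk browser input. The 'norsk-frame' event and the
// {request: 'ready'} payload are as described above; the detail shape is assumed.
type NorskFrameDetail = { pts?: number; streamKey?: unknown };

// Apply the frame's metadata to the DOM, then build the readiness payload
// that the page must pass to window.cefQuery once the update is complete.
function handleNorskFrame(
  detail: NorskFrameDetail,
  updateDom: (d: NorskFrameDetail) => void
): { request: string } {
  updateDom(detail);
  return { request: "ready" };
}

// In the browser page itself this would be wired roughly as:
// window.addEventListener("norsk-frame", (e) => {
//   window.cefQuery(handleNorskFrame((e as CustomEvent<NorskFrameDetail>).detail, applyOverlay));
// });
```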
A CMAF ingest input node that receives fMP4 segments via HTTP PUT/POST. Once created, the ingest endpoint accepts segments at: - |
|
SDI capture through a DeckLink card. see: NorskInput.deckLink(). |
|
SDI capture through a Deltacast card. see: NorskInput.deltaCast(). |
|
A media node that can run a generic AI model for tasks like object detection. |
|
see: NorskInput.fileMp4() |
|
see: NorskInput.fileTs() |
|
see: NorskOutput.fileTs() |
|
see: NorskInput.fileWav() |
|
see: . |
|
see: NorskOutput.whep() |
|
A media node that uses the Gemini Live API to continuously analyse video/audio, dispatching tool calls and text in real time. |
|
A session for interactively planning a LiveSpec via AI. This wraps a bidirectional gRPC stream for the Live Plan API, allowing you to submit a query, refine the proposed spec, and accept the final result. |
|
see: . |
|
see: |
|
see: NorskInput.moq() |
|
see: NorskOutput.moq() |
|
MXL Input Node - reads media from an MXL shared memory domain. see: NorskInput.mxl() |
|
MXL Output Node - writes media to an MXL shared memory domain. see: NorskOutput.mxl() |
|
NDI Capture see: NorskInput.ndi(). |
|
see: NorskOutput.ndi() |
|
The entrypoint for all Norsk Media applications |
|
QualityMonitoring is a ProcessorNode: it analyses incoming video frames, annotates them with quality decisions, and forwards the annotated frames to downstream subscribers (e.g. a QualityReporting node). see: NorskInspect.qualityMonitoring() |
|
A media node that evaluates video frames against a ReasoningSpec. Subscribes to a video source, sends frames to the evaluation LLM, and emits tool calls and responses as events. |
|
A session for interactively planning a ReasoningSpec via AI. This is NOT a media node. It wraps a bidirectional gRPC stream for the Plan API, allowing you to submit a query, refine the proposed spec, and accept the final result. |
|
see: NorskOutput.rtmp() |
|
see: NorskInput.rtp() |
|
see: NorskDuplex.sip() |
|
Spectrum input inference node — subscribes to a video stream, samples frames, and streams auto-detection results (colour primaries, transfer function, etc.). see: NorskControl.spectrumInputInference() |
|
Spectrum video processor - applies an expression-based processing pipeline to video. Supports resize, crop, compose, color grading, tone mapping, LUT, transfer function, and gamut mapping operations. see: NorskTransform.spectrum() |
|
see: NorskInput.srt() |
|
see: NorskOutput.srt() |
|
see: NorskInput.srtRaw() |
|
see: |
|
see: NorskInput.udpTs() |
|
see: NorskOutput.udpTs() |
|
see: NorskOutput.whep() |
|
see: NorskInput.whip() |
|
see: NorskOutput.whip() |
Functions
| Function | Description |
|---|---|
Returns the stream keys for ancillary streams in a media context |
|
Filters a context to only the ancillary streams within it |
|
Returns the stream keys for audio streams in a media context |
|
Filters a context to only the audio streams within it |
|
Reverse-map a protobuf ColorPrimaries enum value to an SDK string type |
|
Extract all source pin names from a spectrum expression tree. Source nodes with no name default to "video". |
|
Construct a waveform representing a string of DTMF digits. This is of finite duration; you may wish to sequence silence afterwards |
|
Generate encryption parameters from an encryption KeyID and Key, in the form KEYID:KEY, both 16-byte hexadecimal |
|
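A sketch of assembling that KEYID:KEY string (16 bytes each, i.e. 32 hex characters); formatKeyPair is a hypothetical helper, not the SDK function itself.

```typescript
// Illustrative only: validate and join a 16-byte hex KeyID and Key as KEYID:KEY.
function formatKeyPair(keyId: string, key: string): string {
  const hex16 = /^[0-9a-fA-F]{32}$/; // 16 bytes == 32 hex digits
  if (!hex16.test(keyId) || !hex16.test(key)) {
    throw new Error("keyId and key must each be 16 bytes of hexadecimal");
  }
  return `${keyId}:${key}`;
}
```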
Provided for compatibility, this is just e.g. |
|
Returns the stream keys for playlist streams in a media context |
|
Filters a context to only the playlist streams within it |
|
Validation function to require at least one audio and at least one video stream. Default validation will often happen to ensure this, since audio and video are usually subscribed from separate media nodes; but when one media node produces both audio and video, default validation cannot know that both are required. |
|
Validation function to require exactly N audio and exactly M video streams. Default validation will often happen to ensure this, since audio and video are usually subscribed from separate media nodes; but when one media node produces both audio and video, default validation cannot know that both are required. |
|
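The counting logic behind such a validation function can be sketched as follows; Stream and StreamKind are hypothetical stand-ins for the SDK's real context types.

```typescript
// Illustrative sketch: require exactly nAudio audio and nVideo video streams.
type StreamKind = "audio" | "video" | "subtitle" | "ancillary";
type Stream = { kind: StreamKind };

function hasExactly(streams: Stream[], nAudio: number, nVideo: number): boolean {
  const audio = streams.filter((s) => s.kind === "audio").length;
  const video = streams.filter((s) => s.kind === "video").length;
  return audio === nAudio && video === nVideo;
}
```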
Select all the streams from the input |
|
Select all the ancillary data streams from the input |
|
Select all the audio streams from the input |
|
Select all the audio and video streams from the input |
|
Select all the subtitle streams from the input |
|
Select all the video streams from the input |
|
Create a selector selecting all the video streams from the input with the specified rendition name |
|
Convert a SpectrumExpression to its protobuf representation |
|
Compares two stream keys by value, returning true if the stream keys refer to the same stream |
|
Returns the stream keys for subtitle streams in a media context |
|
Filters a context to only the subtitle streams within it |
|
Reverse-map a protobuf TransferFunction enum value to an SDK string type |
|
Reverse-map a protobuf VideoRange enum value to an SDK string type |
|
Returns the stream keys for video streams in a media context |
|
Filters a context to only the video streams within it |
|
Interfaces
| Interface | Description |
|---|---|
Settings for an AAC encode see: NorskTransform.audioEncode() |
|
Settings for an AV1 Encode using AmdMA35D hardware. A detailed description of these params can be found on the AmdMA35D Encoder Documentation. These fields have deliberately been written to maintain the same semantics as the AmdMA35D documentation where possible. If left undefined, all will default to AmdMA35D’s own defaults |
|
Common settings for HEVC and H264 Encodes using AmdMA35D hardware. A detailed description of these params can be found on the AmdMA35D Encoder Documentation. Note that in accordance with the MA35D documentation, all bitrates are kbps. These fields have deliberately been written to maintain the same semantics as the AmdMA35D documentation where possible. If left undefined, all will default to AmdMA35D’s own defaults |
|
Settings for an H264 Encode using AmdMA35D hardware. A detailed description of these params can be found on the AmdMA35D Encoder Documentation. These fields have deliberately been written to maintain the same semantics as the AmdMA35D documentation where possible. If left undefined, all will default to AmdMA35D’s own defaults |
|
Settings for an HEVC Encode using AmdMA35D hardware. A detailed description of these params can be found on the AmdMA35D Encoder Documentation. These fields have deliberately been written to maintain the same semantics as the AmdMA35D documentation where possible. If left undefined, all will default to AmdMA35D’s own defaults |
|
Settings for an H264 Encode using AmdU30 hardware. A detailed description of these params can be found on the AmdU30 Encoder Documentation. These fields have deliberately been written to maintain the same semantics as the AmdU30 documentation where possible. If left undefined, all will default to AmdU30’s own defaults |
|
Settings for an HEVC Encode using AmdU30 hardware. A detailed description of these params can be found on the AmdU30 Encoder Documentation. These fields have deliberately been written to maintain the same semantics as the AmdU30 documentation where possible. If left undefined, all will default to AmdU30’s own defaults |
|
Settings for an Ancillary node see NorskTransform.ancillary() |
|
Settings for an Audio Build Multichannel Node see: NorskTransform.audioBuildMultichannel() |
|
Settings for an AudioDecode operation see: NorskTransform.audioDecode() |
|
Settings for an audio encode see: NorskTransform.audioEncode() |
|
Audio energy analysis data from the aggregated local analysis. |
|
Audio energy tool configuration. |
|
Settings for an Audio Gain node see: NorskTransform.audioGain() |
|
An update operation for an Audio Gain node see: AudioGainNode.updateConfig() |
|
Settings for an AudioMeasureLevelsNode see: NorskControl.audioMeasureLevels() |
|
Settings for the Audio Mix Matrix Node see: NorskTransform.audioMixMatrix() |
|
Config update for the AudioMixMatrixNode. Call AudioMixMatrixNode.updateConfig() for updating the config. |
|
The settings for an AudioMix operation see: NorskTransform.audioMix() |
|
An update operation for an AudioMix node see: AudioMixNode.updateConfig() |
|
The settings for a single source within an AudioMix operation see: NorskTransform.audioMix() |
|
Settings for an Audio Signal Generator see: NorskInput.audioSignal() |
|
Settings for an Audio Split Multichannel node see: NorskTransform.audioSplitMultichannel() |
|
Settings for an Audio Transcribe operation using AWS see: NorskTransform.audioTranscribeAws() |
|
Settings for an audio transcribe/translate operation using Azure Speech Service see: NorskTransform.audioTranscribeAzure() |
|
Settings for an Audio Transcribe operation using Whisper sdk (whisper-cpp) see: NorskTransform.audioTranscribeWhisper() |
|
Settings for whisper-cpp VAD model see: AudioTranscribeWhisperSettings |
|
Configuration for pushing a segmented media stream directly to AWS S3 |
|
There are three possible modes: "abr" encodes in average bitrate mode, specified in kilobits/sec (note: 1 kilobit is 1000 bits); the vbv settings control how far the actual bitrate may fluctuate around that average. "cqp" encodes in constant quantizer mode; in general, crf will give better results, although cqp can be faster to encode. "crf" encodes in constant rate factor mode, giving a constant 'quality' at a variable bitrate |
|
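The three modes above can be sketched as a discriminated union; the field names (bitrateKbps, qp, crf) are illustrative, not the SDK's actual settings types.

```typescript
// Illustrative union of the three rate-control modes described above.
type RateControl =
  | { mode: "abr"; bitrateKbps: number } // average bitrate, kilobits/sec (1 kilobit = 1000 bits)
  | { mode: "cqp"; qp: number }          // constant quantizer
  | { mode: "crf"; crf: number };        // constant rate factor: constant quality, variable bitrate

function describeRateControl(rc: RateControl): string {
  switch (rc.mode) {
    case "abr": return `average ${rc.bitrateKbps} kbps`;
    case "cqp": return `constant quantizer ${rc.qp}`;
    case "crf": return `constant rate factor ${rc.crf}`;
  }
}
```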
Settings for a Browser Application (Chromium Embedded Framework - CEF). Multiple browser input nodes will share an underlying CEF instance if the BrowserAppSettings are the same |
|
Settings for a Browser Input see: NorskInput.browser() |
|
A settings update for a running browser see: BrowserInputNode.updateConfig() |
|
Settings for a BrowserRenderNode see: NorskTransform.browserRender() |
|
Settings for a Caption Transform operation see: NorskTransform.captionTransform() |
|
Settings for updating a Caption Transform operation at runtime see: CaptionTransformNode.updateConfig() |
|
Settings for a CMAF Audio Output see NorskOutput.cmafAudio() |
|
Settings for a CMAF ingest input. Receives fMP4 init and media segments via HTTP PUT/POST. |
|
Settings for a CMAF Multi Variant Playlist see NorskOutput.cmafMultiVariant() |
|
Settings for CMAF Audio and Video Outputs see NorskOutput.cmafAudio(), NorskOutput.cmafVideo() |
|
Settings for a CMAF Video Output see NorskOutput.cmafVideo() |
|
Settings for a CMAF WebVTT Output see NorskOutput.cmafWebVtt() |
|
A layer to be composited on top of the base image |
|
Settings to configure a DeckLink (BlackMagic) output for SDI/HDMI playback see NorskOutput.deckLink() |
|
Settings to control SDI capture through a Deltacast card see: NorskInput.deltaCast() |
|
A single detected object from object detection analysis. |
|
Drop a number of frames once a certain number of frames have already been accepted |
|
Drop every N frames from an incoming video stream |
|
Randomly drop frames on a stream: 0.0 means drop no frames; 1.0 means drop every single frame |
|
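The probability semantics above amount to a Bernoulli trial per frame (illustrative only; the node implements this internally):

```typescript
// Decide per-frame whether to drop, given a drop probability in [0, 1].
// rand is injectable so the behaviour is testable; defaults to Math.random.
function shouldDrop(probability: number, rand: () => number = Math.random): boolean {
  return rand() < probability;
}
```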
Drop the first N frames from an incoming video stream |
|
Settings for audio-input models |
|
Represents a single output tensor from an inference operation. |
|
Settings for video-input models |
|
LLM response time statistics emitted after each evaluation response. |
|
An event that the evaluation LLM can raise. The declaration (name, description, fields) is sent to the planning LLM so it can design the spec around these events. At evaluation time, events are emitted via onLLMResponse. |
|
A single field definition for an event declaration. |
|
Settings for an image file source see: NorskInput.fileImage() |
|
Information about an Mp4 File |
|
Settings for a File Based Mp4 Input see: NorskInput.fileMp4() |
|
Settings for updating a file-based Mp4 Input see: FileMp4InputNode.updateSettings() |
|
Settings to control MP4 file output see NorskOutput.fileMp4() |
|
Information about an Mp4 File track (i.e. an audio or video stream) |
|
The settings for an output Transport Stream written to file see: NorskOutput.fileTs() |
|
Settings for a File Based WAV Input see: NorskInput.fileWav() |
|
Settings to control WAV file output see NorskOutput.fileWav() |
|
Settings to control MP4 file output see NorskOutput.fileMp4() |
|
Settings for Gemini Video |
|
Options that affect automatic conversion behaviour (e.g. decode steps inserted when subscribing between nodes). Only supplied fields are changed; omitted fields retain their current (or default) values. Changes take effect on future subscriptions only. |
|
Settings for an HlgToSdrNode see: |
|
Configuration for pushing a segmented media stream directly to a generic http server |
|
Settings for a HLS TS Audio Output see NorskOutput.hlsTsAudio() |
|
Settings for a HLS Transport Stream Combined Push Output see NorskOutput.hlsTsCombinedPush() |
|
Settings for a Hls Ts Multivariant Playlist see NorskOutput.hlsTsMultiVariant() |
|
Settings for a HLS TS Video Output see NorskOutput.hlsTsVideo() |
|
The settings for an Image Preview Output see NorskOutput.imagePreview() |
|
Base settings for most input nodes |
|
Shared configuration for inputs that have a start-up threshold; that is, they drop data until the input has stopped bursting and settled down |
|
Settings for the subtitles inspect node see NorskInspect.subtitles() |
|
A time interval measured as ticks / (ticks per second). This represents Norsk’s internal arbitrary-precision timestamp. See addInterval(), intervalToMs() |
|
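The arithmetic behind such an interval is a plain rational: milliseconds = ticks × 1000 / ticksPerSecond. A minimal sketch with an assumed Interval shape (the SDK's own helpers such as intervalToMs() are the real API):

```typescript
// Assumed shape for illustration only.
type Interval = { ticks: number; ticksPerSecond: number };

// Convert a tick-based interval to milliseconds.
function toMs(i: Interval): number {
  return (i.ticks * 1000) / i.ticksPerSecond;
}
```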
Shared configuration for outputs that use a jitter buffer |
|
Settings for a Jitter Buffer see: NorskTransform.jitterBuffer() |
|
Information about a channel within a license |
|
License information, as reported by the Kantar Snap Embedder; see NorskKantarEmbedder.kantarEmbedder() |
|
OfflineLicense definition for the NorskKantarEmbedder.kantarEmbedder() |
|
The base OfflineLicense interface for the NorskKantarEmbedder.kantarEmbedder() |
|
OnlineLicense definition for the NorskKantarEmbedder.kantarEmbedder() |
|
The base OnlineLicense interface for the NorskKantarEmbedder.kantarEmbedder() |
|
Online License information, as reported by the Kantar Snap Embedder; see NorskKantarEmbedder.kantarEmbedder() |
|
Settings for a Kantar Audio Watermark Embedder node see: NorskKantarEmbedder.kantarEmbedder() |
|
Event raised by the Kantar Snap Embedder; see NorskKantarEmbedder.kantarEmbedder() |
|
Kantar version information |
|
Settings for a LiveEvaluateNode. The spec carries the configuration from the live plan phase. |
|
Token usage statistics from a live evaluate turn. |
|
Video configuration for the live evaluate node. |
|
Settings for creating a live plan session. |
|
The specification produced by the Live Plan phase. Describes how to configure a Live API session for real-time video analysis. |
|
Combined LLM response data emitted after each evaluation cycle. |
|
The standard settings for any node reading from a file |
|
Configuration for serving segments and playlists directly from the Norsk Web Server. Note: while this is useful both for local testing and for sitting behind a reverse caching proxy / CDN, it is not expected that Norsk serve as the edge server in most scenarios |
|
Settings for an H264 Encode using Netint Logan hardware. A detailed description of these params can be found on the Netint Logan Encoder Documentation. These fields have deliberately been written to maintain the same semantics as the Logan documentation where possible. If left undefined, all will default to Logan’s own defaults |
|
Settings for an HEVC Encode using Netint Logan hardware. A detailed description of these params can be found on the Netint Logan Encoder Documentation. These fields have deliberately been written to maintain the same semantics as the Logan documentation where possible. If left undefined, all will default to Logan’s own defaults |
|
Settings for an LutProcessorNode see: |
|
Settings for Media Store playback see: NorskMediaStore.player() |
|
Settings to configure a media store recorder see NorskMediaStore.recorder() |
|
Settings to configure a media store snapshot see NorskMediaStore.snapshot() |
|
Settings for a MetadataCombineNode, see NorskTransform.metadataCombine() |
|
A metadata message as carried in a Transport Stream |
|
Settings for metrics node see |
|
Settings for a MoQ ingest listener that accepts publisher connections. |
|
Settings for starting a QUIC listener for direct MoQT subscriber connections. Multiple egests on the same port share a single listener via the listener pool. |
|
Settings for connecting to a MoQT relay (e.g. CDN primary or backup). |
|
Settings to configure a Moq Egest see NorskOutput.moq() |
|
Tunable aggregation weights for the MQA quality-metrics pipeline. Both sub-objects are optional — the server falls back to the defaults defined in Avp.Types.QualityMetrics for any missing piece. |
|
Weights driving the transport-stream error aggregation inside the composite MQA-VIDEO score. All values are non-negative per-event penalty weights. |
|
Weights controlling the MQA-VIDEO composite score aggregation. The score starts at 100 and subtracts a penalty per metric, clamped to [0..100]. See the PureScript QualityMetrics module for the exact math. |
|
Settings to configure an MXL (Media eXchange Layer) input. Reads v210 video and float32 audio from an MXL domain. see: NorskInput.mxl() |
|
Configuration for a single MXL output flow (becomes an input pin on the node). |
|
Settings to configure an MXL (Media eXchange Layer) output. Writes v210 video and float32 audio to an MXL domain for consumption by other MXL-enabled applications. see: NorskOutput.mxl() |
|
Settings common to all media nodes |
|
Methods that allow you to manipulate color space, transfer functions etc in your video streams |
|
Methods that allow you to control and monitor media streams |
|
Methods that allow you to both ingest and egest media from your application at the same time |
|
Methods that allow you to inspect media streams, for monitoring, decisioning or debug purposes. |
|
Methods that allow you to embed audio watermarks into your media streams |
|
Methods to interact with the Media Store live-to-vod recording engine |
|
Methods that allow you to egest media from your application |
|
Top level Norsk configuration |
|
Methods that allow you query and update the features of the system that Norsk is running in |
|
TODO see: NorskTransform.videoEncode() |
|
Methods that allow you to manipulate your media streams |
|
Per-architecture knobs for NR-IQA models on the QualityMonitoring node. Only the sampling rate lives here; hysteresis thresholds have moved to QualityReportingVideoSettings. |
|
Settings for an H264 Encode using Nvidia hardware. A detailed description of these params can be found on the Nvidia Encoder Documentation. If left undefined, all will default to Nvidia’s own defaults. If a preset is configured, then all will default to the values provided by that preset |
|
Settings for an HEVC Encode using Nvidia hardware. A detailed description of these params can be found on the Nvidia Encoder Documentation. If left undefined, all will default to Nvidia’s own defaults. If a preset is configured, then all will default to the values provided by that preset |
|
The rate control options for an Nvidia encode. For further info, consult the Nvidia Encoder docs |
|
Object detection analysis data from the aggregated local analysis. |
|
Object detection tool configuration. |
|
A rectangle used for describing a subset of an image |
|
Settings for an Opus encode see: NorskTransform.audioEncode() |
|
A transition for a video composition part. A transition interpolates the source_rect, dest_rect, and opacity properties over the specified duration according to the specified easing function. As a special case, if a transition is specified and the input pin of the part changes, an opacity fade from one to the other will occur. |
|
Settings for an AV1 Encode using Netint Quadra hardware. A detailed description of these params can be found on the Netint Quadra Encoder Documentation. These fields have deliberately been written to maintain the same semantics as the Quadra documentation where possible. If left undefined, all will default to Quadra’s own defaults |
|
Settings for an H264 Encode using Netint Quadra hardware. A detailed description of these params can be found on the Netint Quadra Encoder Documentation. These fields have deliberately been written to maintain the same semantics as the Quadra documentation where possible. If left undefined, all will default to Quadra’s own defaults |
|
Settings for an HEVC Encode using Netint Quadra hardware. A detailed description of these params can be found on the Netint Quadra Encoder Documentation. These fields have deliberately been written to maintain the same semantics as the Quadra documentation where possible. If left undefined, all will default to Quadra’s own defaults |
|
Black-frame detection configuration for the QualityMonitoring node. Only operational parameters live here; hysteresis thresholds have moved to QualityReportingVideoSettings. |
|
Dynamic configuration update for the QualityMonitoring node. Only NR-IQA sample rate can be changed at runtime. |
|
Frozen-frame detection configuration for the QualityMonitoring node. Presence enables freeze-detect processing; all hysteresis configuration has moved to QualityReportingVideoSettings. |
|
Settings for the QualityMonitoring node. see NorskInspect.qualityMonitoring() |
|
Video-side quality monitoring configuration. Each detection is individually configurable; omit to disable. |
|
Dynamic configuration update for the QualityReporting node. All fields are optional — only provided fields are updated; omitted fields keep their current values. |
|
Settings for the QualityReporting node. see NorskInspect.qualityReporting() |
|
Video-side quality reporting configuration. Each detection carries its own hysteresis thresholds; omit to disable that detection. |
|
Settings for a ReasoningEvaluateNode. The spec carries the prompt and state from the plan phase - evaluate just needs the spec, a provider, and video configuration. |
|
Settings for creating a reasoning plan session. Two modes of operation: - Event mode: provide |
|
The specification produced by the planning phase. state: opaque JSON designed by the planning LLM for the evaluation LLM to track temporal context; Norsk does not interpret this. prompt: the instruction sent to the evaluation LLM alongside each frame and the current state. explanation: optional reasoning from the planning LLM about its design choices (why it chose certain state fields, etc.). localAnalysis: optional list of local analysis tools to enable. includeFrame: whether to include JPEG frames in LLM calls (default true). Internally, the SDK implementation may carry additional handler state, but this is not part of the public interface. |
|
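A minimal ReasoningSpec-shaped object, using only the field names listed above; all concrete values and types are illustrative.

```typescript
// Sketch of a spec as the planning phase might produce it. Field names come
// from the description above; everything else is made up for illustration.
const spec = {
  state: { lastEvent: null as string | null }, // opaque JSON for the evaluation LLM
  prompt: "Report when a person enters the frame.",
  explanation: "Tracks the last raised event to avoid duplicate reports.",
  localAnalysis: [] as string[],               // optional local analysis tools to enable
  includeFrame: true,                          // include JPEG frames in LLM calls (default true)
};
```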
Video configuration for the evaluate node |
|
Base settings for any input node requiring access to a host:port pair |
|
The resolution of a video within Norsk |
|
Status update emitted when the LLM provider returns a retryable error (e.g. 429 rate limit). Sent before each retry attempt so the user has real-time visibility into transient failures. |
|
The settings for an RTMP output see: NorskOutput.rtmp() |
|
Settings to control how RTMP streams can be included as sources in your media workflow see: NorskInput.rtmpServer() |
|
A description of an Eac3 stream being delivered via RTP |
|
A description of an H264 stream delivered over RTP |
|
A description of an HEVC stream delivered over RTP |
|
Settings for an RTP input see: NorskInput.rtp() |
|
A description of a LinearPCM stream being delivered via RTP |
|
A description of a Mpeg4 Generic Aac stream |
|
A description of an incoming RTP stream |
|
This is the SAR/PAR for a video stream: an expression of the shape each pixel has within the stream. x:1, y:1 (a square pixel) is the most common value |
|
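The display aspect ratio follows from the resolution and the SAR: DAR = (width × sarX) / (height × sarY). A small illustrative helper, not an SDK API:

```typescript
// Compute display aspect ratio from resolution and sample (pixel) aspect ratio.
function displayAspect(width: number, height: number, sarX: number, sarY: number): number {
  return (width * sarX) / (height * sarY);
}
```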
Scene change detection data from the aggregated local analysis. |
|
Scene change detection tool configuration. |
|
Settings for an SeiInjectionNode. see: NorskTransform.seiInjection() |
|
Settings for a SIP session |
|
Settings for a SourceTime node see NorskTransform.streamSync() |
|
Result from Spectrum input inference analysis |
|
Settings for a SpectrumInputInference node. Analyzes video input to detect colour primaries, transfer function, video range, and other metadata with confidence scoring. |
|
Settings for a Spectrum video processor node see: NorskTransform.spectrum() |
|
Settings for an SRT Input node see: NorskInput.srt() |
|
Settings for an SRT Input node see: NorskInput.srt() |
|
The settings for an SRT output see: NorskOutput.srt() |
|
Settings for an SRT Raw Input node see: NorskInput.srtRaw() |
|
Settings to create an ST2110 Nic see: NorskSystemST2110.nic() |
|
Settings to create an ST2110 NMOS Node see: NorskSystemST2110.node() |
|
Settings to create an ST2110 Output see: |
|
Settings for a StreamAlign node. This will reset all streams to the same framerates/sample rates and align their timestamps so that they completely line up for downstream operations see NorskTransform.streamAlign() |
|
The settings for a Chaos Monkey see: NorskTransform.streamChaosMonkey() |
|
Settings for a Stream Key Override see: NorskTransform.streamKeyOverride() |
|
Settings for a Stream Metadata Override Node see: NorskTransform.streamMetadataOverride() |
|
Settings for a Stream Statistics Node see: NorskControl.streamStatistics() |
|
Settings for the Hard Stream Switch see: NorskControl.streamSwitchHard() |
|
Settings for the Smooth Source Switch see NorskControl.streamSwitchSmooth() |
|
Settings for a StreamSync node see NorskTransform.streamSync() |
|
Settings for a Stream Timestamp Nudge see: NorskTransform.streamTimestampNudge() |
|
Settings to control stream timestamp report see NorskInspect.streamTimestampReport() |
|
Aggregated local analysis data emitted at the configured analysis interval. |
|
Settings for a Subtitle Convert operation see: NorskTransform.subtitleTranslateAws() |
|
An individual timed subtitle fragment (usually a word) |
|
Settings for a DVB Subtitle to image operation see: NorskTransform.subtitleToImage() |
|
Settings for a Subtitle Translate operation using AWS see: NorskTransform.subtitleTranslateAws() |
|
Settings for a UDP Transport Stream input see: NorskInput.udpTs() |
|
The settings for an output Transport Stream over UDP see: NorskOutput.udpTs() |
|
An update request for credentials on a CMAF output |
|
Voice activity detection tool configuration. |
|
Settings for a VideoCompose node |
|
An update operation for a VideoCompose operation see: VideoComposeNode.updateConfig() |
|
A single rung in a video encode ladder see: NorskTransform.videoEncode() |
|
Settings for a VideoEncode operation see: NorskTransform.videoEncode() |
|
Settings for a Video Testcard Generator see: NorskInput.videoTestCard() |
|
Settings for a Video Transform node see: NorskTransform.videoTransform() |
|
Voice activity detection data from the aggregated local analysis. |
|
Settings for a WebRTC browser session see: NorskDuplex.webRtcBrowser() |
|
Settings to control WebSocket output see NorskOutput.webSocket() |
|
The settings for a WebRTC WHEP Output see NorskOutput.whep() |
|
The settings for a WebRTC Whip Output see NorskOutput.whip() |
|
X265 codec |
Type Aliases
| Type Alias | Description |
|---|---|
Channel layout for an AAC audio stream |
|
Settings for an AAC raw bitstream codec (no ADTS/LATM encapsulation) |
|
Aggregation strategy for local analysis tools. |
|
Compositing blend mode |
|
CabrMode, please see CABR documentation for details |
|
Channel layout for an audio stream |
|
Possible destinations for a segmented media stream: HlsPushDestinationSettings (push to a generic HTTP server); AwsS3PushDestinationSettings (push to Amazon S3); LocalPullDestinationSettings (serve directly from the Norsk Web Server) |
|
Color primaries. Note: sRGB uses the same primaries as BT.709. Use "bt709" for sRGB primaries. BT.2100 uses the same primaries as BT.2020. Use "bt2020" for BT.2100 primaries. |
|
The return result of a compose callback, directing which pixels to place where |
|
Crop fill mode |
|
CTA-608 format (note this may be embedded in 708 according to output container). Aka CEA-608/EIA-608. Note the restrictions on 608 captions: 31 characters per line and a low data rate of about 60 characters per second, in particular impacting any attempt to pop-on captions with revisions (e.g. adding word by word) |
|
CTA-708 format (this means native 708, not the embedding of 608 in 708). Note some providers claim 608/708 support but are unclear if this means true 708 or merely 608 embedded in 708. |
|
A decibel (dB). A null value represents -inf. |
|
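The null-means-negative-infinity convention composes naturally with a dB-to-linear conversion (power ratio = 10^(dB/10)); a small illustrative helper, not an SDK API:

```typescript
// Convert a decibel value to a linear power ratio, treating null as -inf dB
// (i.e. zero power), per the convention described above.
function dbToPowerRatio(db: number | null): number {
  if (db === null) return 0; // -inf dB means no power at all
  return Math.pow(10, db / 10);
}
```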
Which ancillary data format (if any) the SDI input should decode. Orthogonal to subtitles: SCTE-104 can in principle coexist with OP47/CEA on the same signal, although this combination is rare in practice. "none": do not decode any ancillary data. "scte104": SCTE-104 splice commands (SMPTE 2010, DID=0x41 SDID=0x07). Decoded messages are exposed as SCTE-35 |
|
Status of a DeckLink output |
|
Hardware profile for DeckLink cards that support multiple sub-device configurations. Cards like the DeckLink Duo 2 and Quad 2 can be configured to expose different numbers of sub-devices with different duplex modes. |
|
Which subtitle format (if any) the SDI input is expected to carry. A real SDI signal will usually carry at most one subtitle format; mixing OP47 teletext with SMPTE 334 CDP (CEA-708/608) is technically possible but rare and unsupported by Norsk. Configuring this explicitly means the input only advertises the subtitle context for the chosen format, so downstream consumers see a single, unambiguous subtitle stream. |
|
Deinterlace method |
|
Various options for de-interlacing video either in software or hardware (where available) |
|
The data type of the elements in a tensor. |
|
The union of all possible input type configurations for the EmbeddedAINode |
|
The pixel format that the model expects. |
|
Field order for interlaced content |
|
Frame rate conversion method |
|
A relative change in decibels, expressing a power ratio. A value of 0dB means no change, positive values mean an increase in power, and negative values mean a decrease in power. |
|
Gamut mapping method |
|
Gemini status enumeration |
|
Status events from the live evaluate session. |
|
Union of all reasoning provider configurations |
|
Configuration for a single local analysis tool (discriminated union on |
|
LUT interpolation method |
|
Result of a MoQ connection callback |
|
Lifecycle event for a single upstream MoQT relay an egest is publishing to. See MoqOutputSettings.onRelayStateChange. |
|
Settings for an MP3 codec (stream contains metadata in mp3 header) |
|
Named channel layout for an audio stream |
|
Colour of noise pixels in white noise mode |
|
See the Nvidia Encoder Docs for a description of this value |
|
See the Nvidia Encoder Docs for a description of this value |
|
See the Nvidia Encoder Docs for a description of this value |
|
See the Nvidia Encoder Docs for a description of this value |
|
See the Nvidia Encoder Docs for a description of this value |
|
See the Nvidia Encoder Docs for a description of this value |
|
See the Nvidia Encoder Docs for a description of this value |
|
See the Nvidia Encoder Docs for a description of this value |
|
See the Nvidia Encoder Docs for a description of this value |
|
Return type to enable control of an RTMP stream once media arrives on it |
|
Authentication for OpenAI Realtime |
|
Settings for an LPCM codec |
|
The source of program-date-time in HLS/DASH outputs |
|
How should the system handle discontinuities in media playlists? |
|
Settings for influencing the encode pipeline built for a Quadra rung within a ladder |
|
The algorithm used for rescaling in the Quadra pipeline. In FFmpeg parlance this is: 0 = default, 1 = filterblit, 2 = bicubic. |
|
NR-IQA (no-reference image quality assessment) configuration. Pick exactly one model. The Spectrum preprocessor builds whatever per-scale ImageNet-normalised RGB float32 tensors the chosen model needs (DBCNN: a single 224²; LAR-IQA: 384² + 1280²; MUSIQ: 224² + 384² + 640²). scoreThreshold is interpreted on the common normalised [0..100] scale, regardless of the model’s native output range. |
|
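A possible shape for such a configuration, sketched in TypeScript. The field and type names here are assumptions for illustration, not the SDK's actual NR-IQA settings type:

```typescript
// Hypothetical shape, for illustration only: the real SDK type may differ.
// Pick exactly one model; the Spectrum preprocessor derives the tensors it needs.
type NrIqaModel = "dbcnn" | "lar-iqa" | "musiq";

interface NrIqaConfig {
  model: NrIqaModel;      // exactly one model
  scoreThreshold: number; // interpreted on the common normalised 0..100 scale
}

const config: NrIqaConfig = { model: "musiq", scoreThreshold: 60 };
```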
Claude provider configuration for reasoning |
|
Gemini provider for reasoning |
|
Gemini provider configuration for reasoning |
|
Local reasoning provider configuration |
|
OpenAI provider configuration for reasoning |
|
Union of all reasoning provider configurations |
|
Vertex AI provider for reasoning |
|
Resize interpolation algorithm. Note: Lanczos is implemented in the underlying Spectrum engine but intentionally not exposed here - the current implementation is too slow for production use. It will be re-exposed once optimised. Use "cubic" for the best quality/speed trade-off in the meantime. |
|
The stream keys in an RTMP input stream |
|
Audio sample rate, in Hz |
|
Schema type for event fields and response definitions. Maps to LLM tool parameter types (JSON Schema subset). |
|
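A recursive JSON-Schema-subset type of the kind the description implies can be sketched as follows. The type name and exact fields are assumptions, not the SDK's actual definition:

```typescript
// Illustrative JSON Schema subset mapping to LLM tool parameter types.
type SchemaType =
  | { type: "string"; description?: string; enum?: string[] }
  | { type: "number"; description?: string }
  | { type: "boolean"; description?: string }
  | { type: "array"; items: SchemaType }
  | { type: "object"; properties: Record<string, SchemaType>; required?: string[] };

// Example: a schema for an event with a label and an optional confidence score.
const eventSchema: SchemaType = {
  type: "object",
  properties: {
    label: { type: "string", description: "Detected object class" },
    confidence: { type: "number", description: "Score in the range 0..1" },
  },
  required: ["label"],
};
```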
Errors found while subscribing to a particular source, separated out by reason: - - - - - |
|
Spectrum expression tree describing a video processing pipeline. Each expression takes a source (another expression) and transforms it. |
|
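A toy expression tree in the same spirit, where every transform wraps a source expression. The node kinds below are illustrative, not the SDK's actual Spectrum node types:

```typescript
// Each non-input expression takes a source (another expression) and transforms it.
type Expr =
  | { kind: "input"; streamId: string }
  | { kind: "resize"; width: number; height: number; source: Expr }
  | { kind: "toneMap"; algorithm: string; source: Expr };

// A resize applied to a tone-mapped input.
const pipeline: Expr = {
  kind: "resize",
  width: 1280,
  height: 720,
  source: { kind: "toneMap", algorithm: "hable", source: { kind: "input", streamId: "cam0" } },
};

// Walk the chain back to its root input.
function rootInput(e: Expr): string {
  return e.kind === "input" ? e.streamId : rootInput(e.source);
}
```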
The return value for the callback determining what to do with an incoming stream |
|
Stability mode for the jitter buffer: how to determine when input is stable |
|
Errors found while setting up subscriptions, separated out by reason: - - - - |
|
Determines what to do with an incoming context:
- true/accept: Allow the incoming context through, and any subsequent/queued data that belongs to it.
- false/deny: Deny the incoming context; if no context has been accepted, queue data until one is.
- accept_and_terminate: Allow the incoming context, then deny further data, flush, and shut down the node. This is useful for cleanly terminating outputs when the context is empty.
- deny_and_queue: Deny the incoming context, and revert to the original queueing behaviour as if no context had been accepted. This is useful when switching from one full context to another, avoiding any "in-between".
- deny_and_drop: Deny the incoming context, drop any currently queued data, and drop any further data that might be received. This is useful if you have a lot of setup on start-up and would prefer not to queue data while waiting for that to take place. |
|
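The five decisions can be sketched as a plain union plus an example policy callback. This is illustrative only; the SDK's actual return type and callback signature may differ:

```typescript
// The five possible decisions for an incoming context.
type ContextDecision =
  | "accept"               // true/accept
  | "deny"                 // false/deny
  | "accept_and_terminate" // accept, then flush and shut down the node
  | "deny_and_queue"       // deny and revert to queueing behaviour
  | "deny_and_drop";       // deny and drop queued/further data

// Example policy: accept the first non-empty context, terminate cleanly on an
// empty one once something has been accepted, and queue while switching.
function onContext(streamCount: number, haveAccepted: boolean): ContextDecision {
  if (streamCount === 0) return haveAccepted ? "accept_and_terminate" : "deny";
  return haveAccepted ? "deny_and_queue" : "accept";
}
```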
The format to convert subtitles to |
|
Teletext subtitles format. |
|
Tone mapping algorithm. - |
|
Transfer function (OETF/EOTF). Note: BT.2100 uses "pq" (HDR10) or "hlg" (HLG) transfer functions with BT.2020 primaries. Use "gamma26" for DCI-P3 gamma 2.6. |
|
Video chaos effect mode |
|
Settings for a VideoDecode operation see: NorskTransform.videoDecode() |
|
Video range |
|
Describes a waveform for use by the audio signal generator. Constructed from simple sine waves, silence, and sequences, loops, and combinations of those. May be finite or infinite in duration; it often makes sense to follow a timed signal with silence. See mkDtmf() to construct DTMF tones. |
|
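A toy waveform description in the same spirit: sines, silence, and sequences of those. The type below is illustrative only, not the SDK's actual waveform type:

```typescript
// A finite waveform built from sines, silence, and sequences.
type Waveform =
  | { kind: "sine"; frequencyHz: number; durationMs: number }
  | { kind: "silence"; durationMs: number }
  | { kind: "sequence"; parts: Waveform[] };

// "Follow a timed signal with silence", as the description suggests:
const tone: Waveform = {
  kind: "sequence",
  parts: [
    { kind: "sine", frequencyHz: 440, durationMs: 500 },
    { kind: "silence", durationMs: 500 },
  ],
};

// Total duration of a finite waveform.
function durationMs(w: Waveform): number {
  return w.kind === "sequence" ? w.parts.reduce((t, p) => t + durationMs(p), 0) : w.durationMs;
}
```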
WebVTT format. Also used as a generic "full text cue" transport for conversion to other output formats. |
|
The return value for the WhipInputSettings.onConnection callback determining what to do with an incoming stream |
|
See the X264 Docs for a description of this value |
|
Three possible values:
- "none": specify no HRD information
- "vbr": specify HRD information
- "cbr": specify HRD information and pack the bitstream to the specified bitrate

See the X264 Docs for a further description of this value |
|
See the X264 Docs for a description of this value |
|
See the X264 Docs for a description of this value |
|
See the X265 Docs for a description of this value |
|
See the X265 Docs for a description of this value |
|
See the X265 Docs for a description of this value |
|
See the X265 Docs for a description of this value |
|
See the X265 Docs for a description of this value |