Connectors
Connect without
a JVM, ever.
The streaming industry runs Kafka Connect on a JVM. We don’t. Every connector is natively compiled Rust shipped inside the same binary as Iggy: no JVM heap, no GC pauses, no external runtimes. Activate from the Console, map stream payloads, deploy in seconds.
8
sink connectors
4
source connectors
Rust
native · no JVM
0
external runtimes
Architecture
Source. Stream. Sink. That's it.
Connectors run as a separate process managed by Warden on the same node, connecting to Iggy locally over TCP. Connector traffic stays on your infrastructure; it never transits an external network.
External Source → Source → Iggy + Connectors Runtime (Rust native · same node) → Sink → External Sink

Connector traffic stays on the node; data never transits an external network.
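As a sketch of that locality, a connectors runtime config might point at the co-located Iggy server over loopback TCP. The keys below are purely illustrative assumptions, not the actual schema; consult your deployment's Console for the real configuration.

```toml
# Hypothetical runtime config — key names are illustrative.
[iggy]
# Same-node address: connector traffic never leaves the host.
address = "127.0.0.1:8090"
```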
Source connectors
Pull data from external systems and produce messages into Iggy streams. PostgreSQL, Elasticsearch, InfluxDB, and Random (for testing).
Sink connectors
Consume messages from Iggy streams and push them to external destinations: databases, search engines, analytics tables, and HTTP endpoints.
Multiple instances
Activate multiple instances of the same connector: for example, two PostgreSQL sinks writing to different databases simultaneously.
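To make the multiple-instance idea concrete, here is a hypothetical configuration sketch of two PostgreSQL sink instances consuming the same stream but writing to different databases. All keys, names, and connection strings below are illustrative assumptions, not the platform's actual schema.

```toml
# Hypothetical sketch — two instances of the same PostgreSQL sink plugin.
[sinks.orders_primary]
type = "postgres"                                   # same plugin…
streams = ["orders"]
connection = "postgres://app@db-primary:5432/orders"

[sinks.orders_audit]
type = "postgres"                                   # …second instance
streams = ["orders"]
connection = "postgres://app@db-audit:5432/audit"   # different database
```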
Sink Connectors
Send data anywhere.
Eight production-ready sink plugins for routing Iggy messages to databases, search engines, analytics platforms, and HTTP endpoints.
PostgreSQL
Relational DB · Push messages to PostgreSQL databases. Map stream payloads to table columns with configurable schema transforms.
Elasticsearch
Search · Index messages into Elasticsearch clusters. Real-time search and analytics over your streaming data.
Apache Iceberg
Analytics · Write messages to Apache Iceberg tables. First-class lakehouse integration with schema evolution.
Quickwit
Search · Index messages into the Quickwit search engine. Sub-second full-text search over high-volume streams.
MongoDB
Document DB · Push messages to MongoDB collections. Flexible document mapping for event-driven architectures.
InfluxDB
Time Series · Write messages to InfluxDB time-series databases. Native support for metrics and time-series workloads.
HTTP
Webhook · Send messages to any HTTP endpoint: webhooks, REST APIs, or custom ingestion pipelines. No code required.
Stdout
Dev / Debug · Output messages to standard output. Ideal for debugging connector pipelines and development workflows.
Source Connectors
Ingest from anywhere.
Four source plugins that pull from external systems into Iggy streams, including a Random source for load testing and development.
PostgreSQL
Relational DB · Ingest data from PostgreSQL databases into Iggy streams. CDC-style ingestion for change-driven architectures.
Elasticsearch
Search · Ingest data from Elasticsearch into Iggy streams. Replay stored documents as streaming events.
InfluxDB
Time Series · Ingest data from InfluxDB time-series databases into Iggy streams. Time-series replay and real-time forwarding.
Random
Testing · Generate random test messages for development and load testing. Spin up realistic workloads without external dependencies.
Lifecycle
Activate from the Console. No code.
Browse the connector catalog in your deployment's Console tab, click Activate, configure stream mappings, and the platform handles the rest: provisioning on all nodes, monitoring, and lifecycle management.
Instance states
Instance created, waiting for nodes to activate.
Running and processing messages.
Disabled, with configuration preserved for re-enabling.
Encountered errors. Check logs and retry.
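The four states above form a simple lifecycle, which the Rust sketch below models as an enum. The state names (`Pending`, `Running`, `Stopped`, `Failed`) and the `can_start` helper are illustrative assumptions for this example; the Console may label the states differently.

```rust
// Hypothetical model of the connector instance lifecycle described above.
// State names are illustrative, not the platform's actual identifiers.
#[derive(Debug)]
enum InstanceState {
    Pending, // instance created, waiting for nodes to activate
    Running, // running and processing messages
    Stopped, // disabled, configuration preserved for re-enable
    Failed,  // encountered errors; check logs and retry
}

impl InstanceState {
    /// Whether an operator action can (re)start the instance from this state.
    fn can_start(&self) -> bool {
        matches!(
            self,
            InstanceState::Pending | InstanceState::Stopped | InstanceState::Failed
        )
    }
}

fn main() {
    // A disabled instance keeps its configuration, so it can be re-enabled…
    assert!(InstanceState::Stopped.can_start());
    // …while a running instance has nothing to start.
    assert!(!InstanceState::Running.can_start());
    println!("lifecycle model ok");
}
```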
Built-in connector monitoring
Per-instance metrics are available in the Console's Metrics tab and via API: messages produced, consumed, processed, error count, CPU and memory usage.
Connect your data stack today.
Join the LaserData preview and activate your first connector in minutes: no code, no external runtimes, no infrastructure to manage.