OUR RESEARCH FOCUS

AI for the Real-World

REQUEST DEMO
We built Blinkin with one simple belief - getting help at work should be as easy as pressing a button.
Hand holding smartphone showing paused video of a man with options to discover product, compare products, and ask questions.
Flag of the European Union with twelve yellow stars arranged in a circle on a blue background.
Backed by the European Union

Blinkin VLM - Dense & Mixture-of-Experts

Beyond Text and Pixels: How Blinkin VLM Works

WATCH DEMO
AI models today can do a lot - write code, create images, and generate text. Tools like OpenAI’s GPT-4 and Google’s Gemini show how far language and image models have come. But most of them still treat text and visuals separately. Blinkin VLM takes the next step. It connects text and images, making sense of how they relate to each other.
Diagram illustrating a multimodal learning architecture with ViT and RoBERTa encoders for image and text, a transformer multimodal encoder, and a Llama 3 text decoder, showing connections and loss functions.

The Problem with Single-Modality Superstars

KNOW MORE
Today’s best AI models are strong but usually focused on one type of data. Language models handle text well because they’re trained on massive amounts of writing. Image models are great with visuals because they’ve been trained on billions of pictures. They work as specialists. But the real world isn’t like that. We don’t separate text, images, and instructions; we combine them. When we read a flowchart, we use both the diagram and the text to understand it. When we look at a scientific image, we connect the labels to the visual details. This is the challenge Blinkin VLM was built to solve. It's not just a language model with an image encoder slapped on; it's a true vision-language model, engineered from the ground up to fuse information from diverse modalities and reason across them. Blinkin VLM is not just another contender; it's a new class of AI.
Food web diagram showing producers like trees and various consumers including pika, red-breasted nuthatch, Douglas's squirrel, mule deer, coyote, mountain lion, and bobcat with arrows indicating feeding relationships.

How Blinkin VLM Works: Self-Supervised Multimodal Learning

WATCH DEMO
Blinkin VLM uses a self-supervised training approach. Instead of depending on large, human-labeled datasets, it learns directly from raw data. It’s like a student figuring things out by practice rather than being given the answers. The model is trained with objectives that push it to build a shared understanding of both visuals and text. This lets it connect language and images in a more natural way.
Table listing Objective ID, Modality, Pretraining Objective, and SSL Loss Function with entries showing unimodal and multimodal approaches such as Masked Language Modeling (L_MLM), Masked Image Modeling (L_MIM), and Image-Text Matching (L_ITM-CE).
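In notation, one plausible way to combine these objectives is a weighted sum, sketched below; the weights λ are an illustrative assumption rather than a published configuration, and the teacher-student regression term described later can be folded in the same way.

```latex
% Hedged sketch: the combined pretraining objective as a weighted sum of
% the listed losses. The weights \lambda_i are assumptions, not published values.
\[
\mathcal{L}_{\text{total}}
  = \lambda_{1}\,\mathcal{L}_{\text{MLM}}
  + \lambda_{2}\,\mathcal{L}_{\text{MIM}}
  + \lambda_{3}\,\mathcal{L}_{\text{VTD}}
  + \lambda_{4}\,\mathcal{L}_{\text{LRM}}
  + \lambda_{5}\,\mathcal{L}_{\text{ITM-CE}}
\]
```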

Blinkin VLM Builds a Shared Understanding

Mathematical definitions of the self-supervised loss functions: masked language modeling (L_MLM), masked image modeling (L_MIM), visual token decoding (L_VTD), location-aware region modeling (L_LRM), and image-text matching cross-entropy (L_ITM-CE).
Mobile screens showing the Blinkin URL customization interface, webhook settings, and analytics dashboard.
Masked Text & Image Modeling (L_MLM & L_MIM)

Blinkin VLM begins with standard training tasks. In masked language modeling, parts of a sentence are hidden and the model predicts the missing words. In masked image modeling, parts of an image are hidden and the model reconstructs them. These tasks help the model build strong text and image representations separately. A student encoder works with the masked inputs, while a teacher encoder sees the full data.
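Reconstructed from the loss descriptions above (the expected negative log-probability of masked tokens or patches given the unmasked context), the two losses can be sketched as follows; the symbols w (text tokens), v (image patches), and the mask sets M are notational assumptions.

```latex
\[
\mathcal{L}_{\text{MLM}}
  = -\,\mathbb{E}\Big[\textstyle\sum_{i \in M_{w}}
      \log p_{\theta}\big(w_{i} \mid w_{\setminus M_{w}}\big)\Big],
\qquad
\mathcal{L}_{\text{MIM}}
  = -\,\mathbb{E}\Big[\textstyle\sum_{j \in M_{v}}
      \log p_{\theta}\big(v_{j} \mid v_{\setminus M_{v}}\big)\Big]
\]
```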

Visual Token Decoding (VTD)

Connecting the Dots - Visual Token Decoding trains Blinkin VLM to align visuals with text. For example, given a diagram and its description, the model predicts the missing labels (“visual tokens”) using both text and image data. This builds a strong connection between what it reads and what it sees, enabling cross-modal reasoning.
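From the formula description above (the log-probability of an OCR target word given the remaining OCR words and an image patch), a sketch of the VTD objective is below; the OCR-token notation is an assumption.

```latex
\[
\mathcal{L}_{\text{VTD}}
  = -\,\mathbb{E}\Big[\log p_{\theta}\big(w^{\mathrm{ocr}}_{k}
      \mid w^{\mathrm{ocr}}_{\setminus k},\, v\big)\Big]
\]
```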

Location-aware Region Modeling (L_LRM)

This objective trains Blinkin VLM to predict the contents of a masked region in an image using both surrounding visuals and text. For example, it can link a phrase like “the fox” to the correct part of an image. This helps the model handle detailed diagrams where specific components matter, such as scientific illustrations.
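The corresponding loss is described above as an expectation over a sum of squared norm differences, which suggests a regression form like the sketch below; f_θ(r) denotes the model's prediction for a masked region r and z_r its target feature, both of which are assumed notation.

```latex
\[
\mathcal{L}_{\text{LRM}}
  = \mathbb{E}\Big[\textstyle\sum_{r \in M_{R}}
      \big\lVert f_{\theta}(r) - z_{r} \big\rVert_{2}^{2}\Big]
\]
```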

Image-Text Matching (L_ITM-CE)

Image-Text Matching: Ensuring Coherence. In this step, the model sees image–text pairs and decides if they match. This pushes it to align visuals and descriptions at a global level — not just recognizing objects or words, but understanding whether the image as a whole fits the text.
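Following the formula description above, a matching head h_θ(w, v) outputs a match probability and is trained with binary cross-entropy over matched (y = 1) and mismatched (y = 0) pairs:

```latex
\[
\mathcal{L}_{\text{ITM-CE}}
  = -\,\mathbb{E}_{(w,\,v,\,y)}\Big[\, y \log h_{\theta}(w, v)
      + (1 - y)\,\log\big(1 - h_{\theta}(w, v)\big) \Big]
\]
```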

Webhook Integrations

Connect to tools like HubSpot, Notion, Airtable, Salesforce, or your project stack.

Analytics Dashboard

Track views, completions, drop-offs, and engagement insights across all your forms, videos, and AI chats - in real time.

CREATE YOUR BLINKIN NOW

A Powerful Teacher-Student Architecture

KNOW MORE
Blinkin VLM trains with a teacher-student setup. The student encoder sees masked or modified inputs, while the teacher encoder works with the full data. The student then learns by matching the teacher’s outputs, using Smooth L1 Loss to reduce differences. This process helps the model stay accurate even when parts of the data are missing or altered.
Mathematical formula showing the smooth loss function with conditional expression involving y_teacher, f_t, absolute difference, and beta parameters.
Comparison table of WarmUp and Pretraining settings including architecture details, dataset names, training samples, epochs, batch size, learning rate, and OD/OCR confidence threshold.
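To make the training step concrete, here is a minimal PyTorch-style sketch of one student update, assuming an EMA-updated teacher; the function and module names, the EMA coefficient, and beta are illustrative assumptions, and torch.nn.functional.smooth_l1_loss implements the beta-parameterised piecewise loss shown in the figure.

```python
import torch
import torch.nn.functional as F

def teacher_student_step(student, teacher, masked_inputs, full_inputs,
                         optimizer, beta=1.0, ema=0.999):
    # Student sees masked/modified inputs; teacher sees the full data.
    y_student = student(masked_inputs)
    with torch.no_grad():
        y_teacher = teacher(full_inputs)

    # Smooth L1 loss: quadratic below beta, linear beyond it.
    loss = F.smooth_l1_loss(y_student, y_teacher, beta=beta)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Assumed here: the teacher tracks the student as an exponential
    # moving average of its weights (the source only says the teacher
    # sees the unmasked data).
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(ema).add_(p_s, alpha=1.0 - ema)

    return loss.item()
```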
Blinkin VLM goes beyond theory with practical applications across fields. In research, it can explain complex diagrams, identify key components, and generate supporting text. In education, it helps students grasp textbook visuals in subjects like chemistry and physics. For technical teams, it can create or correct documentation from schematics, saving time and effort. It also enables information retrieval by letting users search with images or diagrams instead of text. By linking visuals and language through self-supervised learning, Blinkin VLM offers a more natural way to understand and use information.
WATCH DEMO

Real-World Impact: From Diagrams to Discovery

Two men in a mechanical room inspecting pipes and control panels, one using a smartphone to record or photograph wiring inside an electrical box.
WATCH DEMO

Blinkin AI solving real challenges

At the core of Blinkin VLM is the Data-to-Sequence Tokenizer, a universal translator that converts any data (text, images, video, audio, or even medical scans) into a single unified sequence of tokens. This sequence flows into the Unified Multimodal Model, the "brain" of Blinkin VLM. By processing all modalities together, the model uncovers patterns, connections, and hidden relationships, producing a powerful Semantic Embedding, its distilled understanding of the input. Unlike traditional AI, which needs separate models for different data types, Blinkin VLM can process an X-ray, a patient’s chart, and even a doctor’s voice notes in one stream, delivering a truly holistic view and unprecedented fusion of information.
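To make the idea concrete, here is an illustrative sketch, not Blinkin's actual API, of how a data-to-sequence step could flatten several modalities into one token stream; the marker ids and the helper name are assumptions.

```python
from typing import List, Tuple

MODALITY_MARKER = {"text": 1, "image": 2, "audio": 3}  # assumed special ids

def to_unified_sequence(parts: List[Tuple[str, List[int]]]) -> List[int]:
    """Concatenate per-modality token ids into one sequence, prefixing
    each segment with a marker so the model can tell modalities apart."""
    sequence: List[int] = []
    for modality, token_ids in parts:
        sequence.append(MODALITY_MARKER[modality])
        sequence.extend(token_ids)
    return sequence

# Example: an X-ray (image tokens), the patient chart (text tokens) and a
# voice note (audio tokens) become one stream the model attends over.
unified = to_unified_sequence([
    ("image", [812, 77, 4051]),
    ("text",  [101, 2054, 2003]),
    ("audio", [9, 340, 12]),
])
```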
Diagram illustrating Blinkin VLM integration with various data types including 2D and 3D vision, natural language, audio, infrared, video, tabular data, time series, molecular graphs, hyperspectrum, inertial measurement, and medical x-ray applications.
WATCH DEMO
The true power of Blinkin VLM lies in its ability to handle an "any-to-any" challenge. It can take any combination of inputs and generate any combination of outputs. This capability is not just a technological gimmick; it's a paradigm shift with real-world implications that are truly mind-bending.

A Universe of Applications: From Stock Markets to Surgery

Diagram illustrating Blinkin VLM's unified multimodal model processing various data types like text, audio, image, video, and tables through a data-to-sequence tokenizer to generate semantic embedding for applications including stock analysis, RGB imaging, graph analysis, weather forecasting, autonomous driving, nighttime cameras, remote sensing, activity recognition, speech recognition, medical diagnosis, 3D recognition, and text understanding.
Applications of Blinkin VLM
  • Financial Analysis: Blinkin VLM can look at stock data, news articles, analyst commentary, and even satellite images of factories or ports at the same time. By connecting these pieces, it can spot patterns - for example, linking reduced factory activity with news reports to anticipate changes in stock prices.
  • Autonomous Navigation: Self-driving cars rely on many inputs like live video, radar, and thermal images. Blinkin VLM can bring these together, helping the vehicle detect pedestrians in the rain, cars in fog, or sudden braking ahead, while also reading signs and signals. This improves safety and decision-making on the road.
  • Environmental Monitoring: Blinkin VLM can process weather data (Time Series) and sensor readings (Graph) to monitor environmental changes.
  • Beyond the Known: The possibilities are truly endless. Blinkin VLM can power intelligent assistants that not only understand your voice but also your gestures (IMU) and facial expressions, creating a more natural and empathetic interaction. It can analyze social media networks (Graph) in conjunction with user posts (Text, Image) to predict cultural trends and understand the spread of information. It can even be used to generate new art, music, and literature by drawing inspiration from all forms of media simultaneously.
What sets Blinkin VLM apart is its ability to capture what lies between the lines. Traditional AI might identify objects in a coffee shop: a table, a chair, a cup. Blinkin VLM goes further: by combining text and images, it grasps the deeper idea of a "third place", a welcoming hub for community and culture. This ability to link abstract concepts across different types of data allows it to move past simple recognition and toward genuine comprehension. It doesn’t just catalog what’s there; it understands the relationships, context, and meaning that bind them together.
WATCH DEMO

The Latent Truth: Deeper Than the Surface

Man inspecting and recording video of wiring inside an electrical control panel in an industrial mechanical room.
WATCH DEMO

From Chaos to Clarity: How Blinkin VLM Converts Unstructured into Structured

In engineering, manufacturing, and scientific research, logbooks hold critical information: handwritten notes, sensor readings, error codes, and diagrams that capture the full history of a machine, experiment, or system. The problem is that this data often sits in unstructured formats, making it hard for machines to process or use effectively. Blinkin VLM addresses this by transforming logbook data into structured, actionable insights. Instead of simply scanning documents, it can read, interpret, and connect the information, enabling faster problem-solving and more informed decision-making.
Flowchart showing a multi-agent LogBook analysis system starting with user input and ending in three parallel autonomous agents for syntactic & semantic extraction, programmatic numerical analysis, and semantic grounding & verification, utilizing remote and local MCP tools including SoTA vision-language models, document OCR, LLM code generation, scientific computation, and anomaly detection with logos for Gemini, Azure, WolframAlpha, and others.
Multi-Agent Architecture of Blinkin VLM
  • Extract & Understand – The system reads logbooks using OCR and structured document intelligence. It identifies not just the text but also the layout, tables, and diagrams, capturing both content and context.
  • Analyze & Compute – It processes numerical data and charts, performing calculations, tolerance checks, and anomaly detection to reveal issues that might otherwise go unnoticed.
  • Verify & Contextualize – Insights are cross-checked against engineering specifications, external references, and historical records, ensuring every finding is accurate, relevant, and tied to practical solutions.
Blinkin VLM is not a single model trying to do everything at once. Instead, it uses a multi-agent system in which each agent specializes in a specific task and all of them work together toward one goal: comprehensive logbook analysis.
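As a rough sketch of such an orchestration (all names below are illustrative assumptions, not Blinkin's interfaces), each agent reads a shared context and merges its findings back in, so later agents can cross-check earlier ones:

```python
from typing import Callable, Dict, List

Agent = Callable[[Dict[str, object]], Dict[str, object]]

def run_pipeline(context: Dict[str, object], agents: List[Agent]) -> Dict[str, object]:
    """Run the agents in order over a shared, growing context."""
    for agent in agents:
        context.update(agent(context))
    return context

def extraction_agent(ctx):
    # Placeholder: OCR and layout parsing of the raw logbook pages.
    return {"entries": []}

def analysis_agent(ctx):
    # Placeholder: tolerance checks and anomaly detection over ctx["entries"].
    return {"anomalies": []}

def verification_agent(ctx):
    # Placeholder: cross-check ctx["anomalies"] against specs and history.
    return {"verified_findings": []}

report = run_pipeline(
    {"pages": ["..."]},
    [extraction_agent, analysis_agent, verification_agent],
)
```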

Beyond the Obvious

The strength of Blinkin VLM’s multi-agent system lies in how the agents collaborate. They communicate continuously, sharing context and validating findings. For example, the Numerical Analysis agent can request clarification from the Semantic Extraction agent, while the Verification agent can cross-check insights with both. This creates a cycle of collective reasoning, where every angle is examined until a reliable conclusion is reached.
WATCH DEMO
Multi-Agent Architecture of Blinkin VLM
Consider a real-world case: a complex industrial machine suddenly shuts down. Instead of an engineer manually searching through hundreds of logbook entries, Blinkin VLM processes the data instantly:
  • Extracts the error code and timestamp.
  • Analyzes sensor readings, detecting a sharp temperature rise before the failure.
  • Verifies this against maintenance records, finding a similar incident tied to a bearing issue the year before.
Rather than replacing human expertise, Blinkin VLM enhances it, acting as a partner that uncovers patterns, recalls past cases, and accelerates problem-solving. Its value comes not from being a larger model, but from being a smarter, specialized, and collaborative one.
Since 2020, Blinkin has been building beyond perception and generative AI, creating a multi-agent intelligence platform that truly collaborates. Agents like Semantic Extraction, Numerical Analysis, and Verification constantly interact, cross-check, and validate each other’s findings. It’s this cycle of collective reasoning that ensures insights are accurate, reliable, and well-rounded. But we’re not just reasoning in abstraction anymore.

Today, Blinkin VLM is evolving into the realm of Physical AI - where agents don’t just perceive the world; they understand, plan, and act in it, paving the way for applications across industries, from autonomous vehicles to customer service and beyond.
WATCH DEMO
NVIDIA presentation slide showing AI development timeline starting in 2020 from Perception AI to Physical AI, alongside a photo of two men smiling at an NVIDIA CES event entrance.

From Collective Reasoning to Physical Intelligence