Legacy process historians (PI, IP21, etc.) were built to address 1980s constraints: high‑frequency time-series compression, proprietary protocol connectivity, and specialized trend and visualization UIs. Those constraints have evolved. Modern storage, open industrial protocols, and embedded analytics make it possible to modernize the “historian layer” without sacrificing continuity or operational safety. This paper outlines a pragmatic, risk-managed approach to evolve from a historian-centric architecture to a modern hybrid industrial data architecture that includes edge data connectivity (OIBus) and a cloud industrial data platform (OIAnalytics).
Why are historian-centric architectures being challenged?
Storage performance is no longer the bottleneck
Traditional historians delivered value through proprietary compression and efficient time-series retrieval. Today, high-performance time-series databases and cloud infrastructure provide scalable ingestion and compression at lower cost, with broader interoperability. The historian’s “time-series storage advantage” is less differentiating; the competitive edge shifts to how quickly data becomes usable across the organization.
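To make the "compression" point concrete, here is a minimal sketch of deadband compression, the simplest member of the family of techniques historians popularized (it is not the actual algorithm used by PI or IP21, and the function name is illustrative): a sample is stored only when it deviates from the last stored value by more than a configured threshold.

```python
def deadband_compress(samples, deadband):
    """Keep a sample only when it deviates from the last stored value
    by more than the deadband; always keep the first and last points.

    samples: list of (timestamp, value) tuples in time order.
    """
    if not samples:
        return []
    kept = [samples[0]]
    for t, v in samples[1:-1]:
        if abs(v - kept[-1][1]) > deadband:
            kept.append((t, v))
    if len(samples) > 1:
        kept.append(samples[-1])  # preserve the series endpoint
    return kept

# Six raw samples collapse to three stored points with a 0.5 deadband.
raw = [(0, 10.0), (1, 10.02), (2, 10.01), (3, 10.6), (4, 10.61), (5, 9.9)]
compressed = deadband_compress(raw, deadband=0.5)
```

Modern time-series databases apply comparable techniques (delta encoding, columnar compression) as commodity features, which is why storage itself is no longer the differentiator.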
Connectivity is converging on standards
Industrial data connectivity has consolidated around vendor-neutral approaches such as OPC UA, MQTT, and APIs, along with open connector ecosystems. When connectivity is standardized at the source or edge, the historian is no longer a mandatory integration hub.
The real value is context: enabling use cases and analytics, not just raw tags
Most organizations don’t struggle to collect tags. They struggle to:
align tags with equipment, product, and recipe context
compute reliable KPIs consistently across sites
link continuous signals with events, batches, cycles, quality, maintenance, genealogy…
operationalize analytics (alerts, golden batches, anomaly detection, investigations…)
That requires a contextualization layer and a consistent semantic model, not just a time-series repository with a basic asset framework.
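A minimal sketch of what that contextualization step means in practice, assuming a simplified data shape (the function and field names are illustrative, not an OIAnalytics API): aligning raw time-series samples with the batch that was active when each sample was taken, so downstream KPIs can be computed per batch rather than per tag.

```python
from bisect import bisect_right

def tag_with_batch(samples, batches):
    """Attach the active batch ID to each time-series sample.

    samples: list of (timestamp, value) tuples.
    batches: list of (start_time, batch_id), sorted by start_time.
    """
    starts = [s for s, _ in batches]
    out = []
    for t, value in samples:
        # Find the latest batch that started at or before this sample.
        i = bisect_right(starts, t) - 1
        batch = batches[i][1] if i >= 0 else None
        out.append((t, value, batch))
    return out

samples = [(5, 71.2), (15, 73.8), (25, 70.1)]
batches = [(0, "B-101"), (10, "B-102"), (20, "B-103")]
contextualized = tag_with_batch(samples, batches)
```

At scale this join must also handle equipment hierarchies, recipes, events, and genealogy, which is why a dedicated contextualization layer pays off over ad hoc scripts.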
The Optimistik approach: modernize with OIBus + OIAnalytics
OIBus: open, modular edge connectivity
OIBus is a lightweight, modular, open-source data collection and transmission layer designed for industrial environments. It allows you to:
collect data from OT/IT sources (PLCs, DCS, historians, MES/ERP, IoT)
support streaming patterns (South connectors → Engine → North connectors)
deploy on Windows/Linux, virtualized environments, and Docker
buffer and route data to on‑prem or cloud targets
archive raw data locally to preserve an immutable copy when required (audit/traceability)
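The South → Engine → North pattern above can be sketched in a few lines. This is a conceptual illustration of the collect/buffer/forward/archive responsibilities, not OIBus's actual implementation; the class and method names are invented for the example.

```python
from collections import deque

class EdgePipeline:
    """Minimal sketch of a collect -> buffer -> forward edge pattern."""

    def __init__(self):
        self.buffer = deque()   # engine buffer between South and North
        self.archive = []       # immutable local raw copy (audit/traceability)
        self.sent = []          # stands in for a North target (cloud API, broker...)

    def south_receive(self, point):
        """A South connector pushes a collected data point in."""
        self.archive.append(point)  # raw copy kept locally first
        self.buffer.append(point)

    def north_flush(self, target_up):
        """Forward buffered points to the North target when it is reachable."""
        if not target_up:
            return 0  # keep buffering; nothing is lost while the target is down
        n = 0
        while self.buffer:
            self.sent.append(self.buffer.popleft())
            n += 1
        return n

pipeline = EdgePipeline()
for point in [("TT-101", 1.0), ("TT-101", 1.1), ("TT-101", 1.2)]:
    pipeline.south_receive(point)
sent_while_down = pipeline.north_flush(target_up=False)  # buffered, not sent
sent_while_up = pipeline.north_flush(target_up=True)     # buffer drained
```

The key design point is that buffering and local archiving are decoupled from the target's availability, which is what makes the edge layer safe to point at a cloud platform.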
OIAnalytics: contextualization and self-service operational intelligence
OIAnalytics is an operational intelligence platform for process industries. It combines:
a contextualization engine (standard data models + continuous data processing)
self-service dashboards and reporting, with no need for code or SQL
configurable business applications (OEE/KPIs, SPC, batch analytics, investigation analytics, optimization analytics, AI-powered assistant)
AI capabilities including OIAssistant (an LLM copilot for contextualized Q&A and guided configuration), Advanced Analytics (guided ML and statistical workflows for root-cause analysis and process optimization), AI Notebook (hands-on advanced analysis for operational users), and ML/AI Deployment for running Python models for real-time optimization and anomaly detection.
Its core role is to transform heterogeneous industrial data into a shared, trusted, reusable data repository for plant and corporate teams.
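As a concrete example of the KPI computation such a platform standardizes, here is the textbook OEE formula (availability × performance × quality). The formula is standard; the function and the sample shift figures are illustrative, not taken from OIAnalytics.

```python
def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    """Overall Equipment Effectiveness = Availability x Performance x Quality.

    planned_time, run_time: same time unit (e.g. minutes).
    ideal_cycle_time: ideal time per part, in the same unit.
    """
    availability = run_time / planned_time
    performance = (ideal_cycle_time * total_count) / run_time
    quality = good_count / total_count
    return availability * performance * quality

# Example shift: 480 min planned, 432 min actually running,
# ideal cycle of 0.5 min/part, 800 parts produced, 760 good.
shift_oee = oee(480, 432, 0.5, 800, 760)
```

Computing this consistently across sites requires agreeing on definitions (what counts as planned time, what counts as good) — exactly the kind of semantic alignment the contextualization layer is for.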
A modern architecture typically separates responsibilities:
Control & safety (SCADA/DCS/PLC): remains authoritative for real-time control and process safety.
Edge data connectivity (OIBus): collects, buffers, standardizes, and routes data.
Contextualization and self-service operational intelligence (OIAnalytics): models business context, computes KPIs, and enables investigations, alerts, reporting, analytics, and AI deployment.
Ecosystem (optional): data analytics and data science tools, enterprise BI, CMMS/EAM, data warehouse…
This separation reduces coupling and avoids relying on the historian UI as the “only way” to understand operations.
Migration strategies by site profile
Scenario A — Sites without a traditional historian (greenfield)
Goal: accelerate time-to-value with a simple, modular stack.
Deploy OIBus + OIAnalytics as the primary operational data backbone.
For process operation continuity, rely on existing SCADA/DCS buffering and redundancy.
Outcome: lower CAPEX, faster deployment, architecture that scales linearly with data volumes.
Scenario B — Sites with an existing historian (brownfield)
Goal: modernize without disruption.
Keep the historian where it is deeply embedded in real-time operations.
Use OIBus as a parallel interface to replicate/mirror the relevant data flows into OIAnalytics. Route non-critical or secondary signals directly via OIBus to OIAnalytics.
Reserve the historian for engineering/specialist use cases; expand OIAnalytics access to broader operational users.
Outcome: faster rollout (tagging already exists), reduced dependence on historian licenses for non-specialists, and a gradual path to optional future replacement.
Financial benefits
Higher business value
A historian investment primarily optimizes the storage and retrieval of tags, and often confines value to a narrow set of specialist users and trend-centric workflows. By contrast, a modern architecture centered on contextualization and operational intelligence turns the same raw data into reusable, business-ready data products. That makes it economically viable to scale a broad set of high-value use cases.
The financial impact typically comes from faster time-to-insight, higher adoption beyond engineering teams, and reduced “shadow IT” (spreadsheets, custom scripts, and point solutions) that accumulate hidden cost over time.
Lower on-prem infrastructure cost with a hybrid edge + cloud model
A hybrid approach separates edge connectivity from compute-intensive analytics and storage, changing the deployment cost structure.
With OIBus handling the on-site “connectivity function” (collecting, buffering, and securely forwarding data), the plant no longer needs to provision large on-prem servers primarily for historian-style storage and visualization. Instead, compute- and storage-intensive capabilities can be centralized in OIAnalytics in the cloud, leveraging scalable technologies.
This typically reduces local cost drivers:
Hardware footprint: smaller on-prem servers or virtual machines, typically limited to connectivity and buffering.
Maintenance effort: fewer local components to patch, back up, and upgrade. OIBus maintenance is simplified through centralized management from the OIAnalytics platform.
Supervision and operations: simplified monitoring because the edge layer is lightweight and purpose-built, while platform operations are handled centrally.
Lifecycle cost: less frequent hardware refreshes, and fewer site-by-site rebuilds when expanding to new use cases.
In practice, this shifts spend from plant-by-plant infrastructure and specialist tooling to a shared platform that can be deployed once and scaled across sites, improving total cost of ownership while expanding the range of value-generating applications.
Conclusion
Industrial data value is no longer created primarily by storing tags. It is created by making data usable through context, standardization, and analytics.
By combining OIBus (edge connectivity) with OIAnalytics (contextualization and self-service operational intelligence), organizations can modernize historian-centric architectures through a phased approach that preserves operational continuity while unlocking value quickly.
Modern approach vs. historian-centric: summary table
| Axis | Historian-centric approach | Modern hybrid approach (OIBus + OIAnalytics) | Why it matters |
|---|---|---|---|
| Storage & scalability | Optimized for legacy constraints (proprietary compression, on-prem sizing) | Leverages modern TSDB/cloud scaling and cost-efficient infrastructure | Shifts differentiation from storing tags to making data usable faster |
| Context & semantics | Basic asset frameworks, often limited and inconsistent across sites | Dedicated rich contextualization layer with a consistent semantic model (assets, traceability, genealogy, events…) | Enables reliable KPIs and reusable data products across the organization |
| Use cases & analytics | Strong for trending and specialist workflows | Built for use cases (investigations, alerts, dashboards, golden batch, anomaly detection, optimization…) | Moves from “viewing data” to operational decision support and improvement |
| Self-service for operations | Access and value often limited to OT/IT and advanced users | Self-service dashboards and configurable no-code business apps for broader teams | Improves adoption and speeds time-to-insight |
| Cost & TCO | Plant-by-plant infrastructure and specialist tooling; costs scale with sites. Multiple-tier deployment for a corporate approach. | Hybrid edge + cloud centralizes heavy compute/storage; lighter on-site footprint | Lower maintenance burden and improved economies of scale across sites |
| Business value & ROI | Optimizes tag storage/retrieval, often with narrow value capture | Turns raw data into trusted, reusable, business-ready data products | Enables more high-value use cases and reduces shadow IT |