A new technology offers companies a single pipeline to manage and control their petabyte-scale data collection. This enables reliable and cost-effective analytics and automation.
Modern clouds generate an enormous amount and variety of data. Business stakeholders want more data-driven insights and automation to make better decisions, increase productivity and reduce costs. However, creating a unified and user-friendly environment for data analytics and automation is difficult due to the complexity and diversity of modern cloud architectures and the variety of monitoring and analysis tools in companies.
Data pipelines, analytics and automation require high security standards
Additionally, companies must ensure that their data pipelines, analytics and automation comply with security and privacy standards such as GDPR. Therefore, companies need visibility and control over their data pipelines, while controlling costs and maximizing the value of their existing data analytics and automation solutions.
Dynatrace OpenPipeline gives business, development, security and operations teams complete visibility and control over their data ingestion, while maintaining the context of the data and the cloud environments from which it comes.
The solution enables these teams to collect, converge, route, enrich, dedupe, filter, mask and transform observability, security and business event data from any source - including Dynatrace® OneAgent, Dynatrace APIs and OpenTelemetry - with customizable retention times for individual use cases.
This allows organizations to manage the ever-increasing volume and diversity of data from their hybrid and multi-cloud ecosystems and empower more teams to access the Dynatrace platform's AI-powered answers and automation without needing additional tools.
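The ingestion operations described above (collect, dedupe, enrich, mask, transform) can be illustrated generically. The following is a minimal sketch in Python of such a pipeline; it is not Dynatrace's actual API, and all function and field names are hypothetical.

```python
import hashlib

def dedupe(events, seen=None):
    """Drop events already ingested, keyed by a hash of their content."""
    seen = set() if seen is None else seen
    out = []
    for e in events:
        key = hashlib.sha256(repr(sorted(e.items())).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            out.append(e)
    return out

def mask_pii(event, fields=("email", "user")):
    """Replace configured personal-data fields with a masked placeholder."""
    return {k: ("***" if k in fields else v) for k, v in event.items()}

def enrich(event, context):
    """Attach source/topology metadata to the event at ingestion time."""
    return {**event, **context}

def pipeline(events, context):
    events = dedupe(events)
    return [enrich(mask_pii(e), context) for e in events]

raw = [
    {"email": "a@example.com", "msg": "login ok"},
    {"email": "a@example.com", "msg": "login ok"},  # duplicate, removed
]
print(pipeline(raw, {"source": "web-frontend"}))
```

Running the sketch deduplicates the two identical events, masks the `email` field, and stamps each event with its source context.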
Benefits of working with other core Dynatrace platform technologies
Dynatrace OpenPipeline works with other core technologies in the Dynatrace platform, including Grail Data Lakehouse, Smartscape topology, and Davis hypermodal AI. This offers the following advantages:
- Petabyte-scale data analysis: Leverages patent-pending stream processing algorithms to achieve dramatically increased data throughput at petabyte scale.
- Uniform data collection: Enables teams to collect observability, security, and business event data from any source and in any format, including Dynatrace OneAgent, Dynatrace APIs, open source frameworks such as OpenTelemetry, and other telemetry signals.
- Real-time data analysis during ingestion: Enables teams to convert unstructured data, such as logs, into structured and usable formats - such as converting raw data into time series, calculating metrics, or creating business events from log lines - right at the point of ingestion.
- Full data context: The context of heterogeneous data points - including metrics, traces, logs, behavior, business events, vulnerabilities, threats, lifecycle events and many others - is maintained and reflects the different parts of the cloud ecosystem from which they originate.
- Privacy and security controls: Users have control over what data they analyze, store, or exclude from analysis. The solution includes fully customizable security and privacy controls to meet customers' specific and regulatory requirements, such as automatic, role-based masking of personal data.
- Cost-effective data management: Helps teams avoid collecting duplicate data and reduce storage requirements by transforming data into usable formats (e.g. from XML to JSON) and allowing teams to remove unnecessary fields without losing insights, context, or analytical flexibility.
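Two of the transformations listed above, turning a raw log line into a structured business event and converting an XML record to JSON while dropping unneeded fields, can be sketched as follows. This is an illustrative Python example, not Dynatrace's implementation; the log format, field names, and dropped fields are all hypothetical.

```python
import json
import re
import xml.etree.ElementTree as ET

# Hypothetical log format: "<timestamp> order=<id> amount=<value>"
LOG_PATTERN = re.compile(
    r"(?P<ts>\S+) order=(?P<order_id>\w+) amount=(?P<amount>[\d.]+)"
)

def log_to_business_event(line):
    """Parse a raw log line into a structured business event, or None."""
    m = LOG_PATTERN.match(line)
    if not m:
        return None
    event = m.groupdict()
    event["amount"] = float(event["amount"])  # typed field, ready for metrics
    return event

def xml_to_json(xml_str, drop=("internal_id",)):
    """Convert a flat XML record to JSON, dropping unnecessary fields."""
    root = ET.fromstring(xml_str)
    record = {child.tag: child.text for child in root if child.tag not in drop}
    return json.dumps(record)

line = "2024-01-31T12:00:00Z order=A42 amount=19.99"
print(log_to_business_event(line))
xml = "<order><internal_id>9</internal_id><sku>X1</sku></order>"
print(xml_to_json(xml))
```

Applying such transformations at ingestion means only structured, trimmed records reach storage, which is where the deduplication and storage savings come from.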
Five to ten times faster data processing
“OpenPipeline is a powerful addition to the Dynatrace platform,” said Bernd Greifeneder, CTO at Dynatrace. “It enriches, converges and contextualizes the heterogeneous observability, security and business data coming from the clouds and provides unified analytics across that data and the services it represents. Like the Grail Data Lakehouse, we built OpenPipeline for petabyte-scale analytics. OpenPipeline works with Dynatrace's Davis hypermodal AI to extract meaningful insights from data, enabling robust analysis and reliable automation.
“According to our internal testing, OpenPipeline powered by Davis AI enables our customers to process data five to ten times faster than comparable technologies. Bringing together and contextualizing data within Dynatrace makes it easier to comply with regulatory requirements and conduct audits, while giving more teams within organizations immediate visibility into the performance and security of their digital services.”
More at Dynatrace.com
About Dynatrace
Dynatrace ensures that software works perfectly worldwide. Our unified software intelligence platform combines broad and deep observability and continuous runtime application security with the most advanced AIOps to deliver answers and intelligent automation from data at remarkable scale. This enables organizations to modernize and automate cloud operations, deliver software faster and more securely, and ensure flawless digital experiences.