In this regular column, we bring you all the latest industry news centered on our main topics of focus: big data, data science, machine learning, AI, and deep learning. Our industry is constantly accelerating, with new products and services being announced every day. Fortunately, we’re in close touch with vendors from this vast ecosystem, so we’re in a unique position to inform you about all that’s new and exciting. Our massive industry database is growing all the time, so stay tuned for the latest news items describing technology that may make you and your organization more competitive.
Tecton Releases Low-latency Streaming Pipelines for Machine Learning
Tecton, the enterprise feature store company, announced that it has added low-latency streaming pipelines to its feature store so that organizations can quickly and reliably build real-time ML models. Real-time ML means that predictions are generated online, at low latency, using an organization’s real-time data; any updates in the data sources are reflected in real-time in the model’s predictions. Real-time ML is valuable for any use case that is sensitive to the freshness of the predictions, such as fraud detection, product recommendations and pricing use cases.
“Enterprises are increasingly deploying real-time ML to support new customer-facing applications and to automate business processes,” said Kevin Stumpf, co-founder and CTO of Tecton. “The addition of low-latency streaming pipelines to the Tecton feature store enables our customers to build real-time ML applications faster, and with more accurate predictions.”
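The freshness property described above comes down to updating features as events arrive rather than in nightly batches. Here is a minimal sketch of that pattern in plain Python, using an invented `SlidingWindowFeature` class; it illustrates the general idea of a low-latency streaming feature, not Tecton’s implementation:

```python
from collections import deque

class SlidingWindowFeature:
    """A toy low-latency streaming feature: count of events per key over a
    rolling time window (a common fraud-detection signal)."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.events = {}  # key -> deque of event timestamps, oldest first

    def ingest(self, key, ts):
        """Record an event the moment it arrives on the stream."""
        self.events.setdefault(key, deque()).append(ts)

    def value(self, key, now):
        """Serve the feature at prediction time, expiring stale events."""
        q = self.events.get(key, deque())
        while q and q[0] <= now - self.window:
            q.popleft()
        return len(q)

feature = SlidingWindowFeature(window_seconds=3600)
for ts in (0, 10, 3000, 4000):
    feature.ingest("card_42", ts)
print(feature.value("card_42", now=4000))  # → 2 (events at 0 and 10 expired)
```

Because the count is updated on every event, a model querying `value()` at serving time sees the data source’s latest state — the property the announcement calls real-time prediction freshness.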
Kyligence Expands Its AI-Augmented Intelligent Data Cloud to Identify, Manage and Accelerate Insights for the Most Valuable Enterprise Data Assets
Kyligence, originator of Apache Kylin and developer of the AI-augmented data services and management platform Kyligence Cloud, announced Kyligence Cloud 4.5, a self-tuning analytics platform that powers interactive data applications, dashboards, ad-hoc analytics, and real-time streaming data.
“We started 2021 with the release of the first ever intelligent data cloud,” said Luke Han, co-founder and CEO, Kyligence. “With Kyligence Cloud 4.5, we have expanded our scope to include faster ad hoc and streaming analytics. This will enable data teams to provide optimal performance across three distinct analytics approaches with a single product. All this is accomplished using the same AI-augmented engine and governed by the same semantic layer.”
TruEra Open Sources TruLens, Neural Network Explainability for ML Models
TruEra, which provides the first suite of AI Quality solutions, announced the availability of TruLens, an open-source explainability software tool for machine learning models based on neural networks. TruLens is the only library for deep neural networks that provides a uniform API for explaining TensorFlow, PyTorch, and Keras models. The software is freely available for download and comes with documentation and a developer community to further its development and use.
“TruLens reflects the over eight years of explainability research that this team has developed both at Carnegie Mellon University and at TruEra,” said Anupam Datta, co-founder, President, and Chief Scientist, TruEra. “This means that it starts as a robust, targeted solution with a strong lineage. There is also a team of deeply knowledgeable people standing by to help out developers as they explore the use of TruLens. We are looking forward to building an active developer community around TruLens.”
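Gradient-based attribution, the family of methods at the heart of neural network explainability tools like TruLens, can be illustrated without any deep learning framework. The sketch below applies the simplest member of that family, input-times-gradient, to a one-layer logistic model; it is a conceptual illustration only and not the TruLens API:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_x_gradient(w, b, x):
    """Input-times-gradient attribution for y = sigmoid(w.x + b):
    attribution_i = x_i * dy/dx_i, computed via the chain rule."""
    s = sigmoid(w @ x + b)
    grad = s * (1.0 - s) * w  # dy/dx for a logistic unit
    return x * grad

w = np.array([2.0, -1.0, 0.0])   # model weights
x = np.array([1.0, 1.0, 5.0])    # input to explain
print(input_x_gradient(w, 0.0, x))
```

The third feature receives zero attribution despite its large input value, because the model ignores it — exactly the kind of per-feature explanation a uniform attribution API surfaces regardless of the underlying framework.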
KX Insights Adds Google BigQuery Integration
KX announced that it has added Google BigQuery integration to KX Insights™, its cloud-first real-time streaming analytics platform. Customers can now gain powerful real-time insights from streaming data through KX, enriched with context from historical data sitting in Google Cloud’s highly scalable multi-cloud data warehouse. KX Insights is natively available on Google Cloud Marketplace as a certified solution. KX Insights takes full advantage of the Google Cloud platform to deliver quality performance while maintaining interoperability with existing processes and data.
“KX Insights has been built for performance, efficiency and ease of use,” says Paul Hollway, Head of Partnerships at KX. “With BigQuery integration, we’re giving customers even more choice in how they bring together real-time insights with historic data for smarter, faster business decision making.”
Provectus Releases Open Data Discovery Platform To Democratize Data Observability and Reliability
Provectus, a Silicon Valley artificial intelligence (AI) consultancy, announced the release of Open Data Discovery (ODD) Platform, a free open-source data discovery and observability tool for data-driven organizations looking to democratize their data by making it more discoverable, manageable, observable, reliable, and secure.
ODD Platform is a next-generation solution for data observability and reliability. It enables enterprises to streamline all data handling processes, facilitate collaboration between data scientists and data engineers, reduce data discovery time, and minimize data downtime. It offers engineers an easy-to-use environment where they can manage data with a variety of built-in tools, making their data entities more reliable, observable, and easily discoverable. For IT organizations, ODD Platform offers a robust solution for understanding and managing their data.
“We are excited to release ODD Platform and offer the engineering community a powerful tool to democratize data at scale,” says Stepan Pushkarev, CTO of Provectus. “We hope ODD Platform will be a viable alternative to siloed data catalogs, and that it will enable data science and data engineering teams to accelerate and facilitate data discovery, minimize data downtime, and, most importantly, focus on building data products. The inefficient metadata exchange between data tools is so mundane that it has become accepted as unavoidable, and we plan to change that.”
HSR.health Releases New GeoHealth Analytic Dashboard to Target and Eliminate Healthcare Inequities
HSR.health, a leading provider of health-focused geospatial data analytics, announced the release of the Health Equity Analytic Dashboard, available within its GeoHealth Platform. The Dashboard reveals insights into the intersection of social, racial, and economic inequities, and enables public and private health decision makers to develop interventions to address and improve those conditions.
“Our Dashboard specifically identifies inequities and health outcomes in an area by the social determinants reflected by that population,” says Ajay Gupta, CEO of HSR.health. “Rather than generalizing and saying the outcomes are unequal, with our tool healthcare leaders can determine where, who, and what factors are affecting those outcomes, and make data-driven decisions to improve public health accordingly.”
Lightup Successfully Completes SOC 2 Type 2 Certification of Cloud Native Data Quality Solution
Lightup, developer of a breakthrough data quality monitoring solution, announced it has successfully completed Service Organization Control (SOC) 2 Type 2 certification, with independent attestation from Linford. In just one year, Lightup has become the first cloud-native data quality monitoring vendor to complete this comprehensive level of certification, which has become table stakes for demonstrating that a company can reliably and securely manage and store an enterprise’s critical data assets.
Lightup is an enterprise-grade data quality monitoring solution that can be up and running in minutes, providing organizations with an ideal solution for ensuring data quality for SQL data stores such as Snowflake and Databricks and streaming data sources including Kafka and Segment. Lightup continuously tracks the data going into and coming out of a software product to detect significant changes that are indicative of degradation in data quality.
“Security and privacy have been important to Lightup from day one and were built into our data quality monitoring solution and software development lifecycle from the ground up,” said Manu Bansal, co-founder, and CEO of Lightup. “Achieving SOC 2 Type 2 compliance demonstrates to our customers that we meet the security and compliance requirements of the most demanding enterprise IT departments and can optimally manage their data – the crown jewels that must be given the highest level of protection.”
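The change detection described above — tracking a data quality metric over time and flagging significant deviations — can be sketched with a simple z-score rule. This is an illustration of the concept, not Lightup’s algorithm:

```python
import statistics

def detect_quality_drift(history, latest, z_threshold=3.0):
    """Flag a data quality metric (e.g., a table's daily null rate) whose
    latest value deviates sharply from its recent history."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero
    z = (latest - mean) / stdev
    return abs(z) > z_threshold, z

null_rates = [0.010, 0.012, 0.011, 0.009, 0.010]  # stable baseline
alert, z = detect_quality_drift(null_rates, 0.08)
print(alert)  # True: the null rate jumped far outside its historical range
```

Run continuously against metrics computed from SQL stores or streams, this kind of rule surfaces degradation in data quality before it silently corrupts downstream dashboards and models.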
Lightrun Launches Support for Python, Giving Developers a Simpler Way to Debug Live Machine Learning Pipelines
Lightrun, a leader in IDE-native observability, announced support for the Python programming language and its ecosystem of deep learning and data science libraries. With support for Python (the second most popular programming language, according to analyst firm RedMonk’s 2021 Programming Language Rankings), Lightrun brings “shift-left” observability to the algorithmically complex data science realm, where troubleshooting production code is essential to maintaining machine learning pipelines.
“Discovering the root cause of failures in a machine learning pipeline is challenging because problems can come from many different sources, including bugs in the code, input data, and improper parameter settings,” said Leonid Blouvshtein, co-founder and CTO at Lightrun. “With Lightrun’s support for Python and related frameworks, now developers can have live probes into their algorithms and models and observe what’s happening in production code, all without leaving the IDE. This is a dramatically more efficient way to evolve machine learning pipelines over previous approaches.”
LumenVox Launches Next-Generation Automatic Speech Recognition Engine with Transcription
LumenVox, a leading provider of speech and voice technology, announced its next-generation Automatic Speech Recognition (ASR) engine with transcription. The new engine, built on a foundation of artificial intelligence (AI) and deep machine learning (ML), outpaces its competition in delivering the most accurate speech-enabled customer experiences.
The new LumenVox ASR engine stands apart from the rest with its end-to-end Deep Neural Network (DNN) architecture and its state-of-the-art speech recognition processing capabilities. The new ASR engine not only accelerates the ability to add new languages and dialects but also provides a modern toolset to expand the language model to serve a more diverse base of users.
“Companies are evolving rapidly and seeking more voice-enabled applications to deliver powerful customer experiences,” said Joe Hagan, chief product officer at LumenVox. “The new LumenVox ASR engine enables customers to meet the increased need for next-generation speech recognition services by offering new languages and dialects without the high cost of professional services.”
GenesisAI: A platform to Build a Network of Machine Learning Models
Machine learning technology has been on the rise over the last decade, and the swift development of AI has led to its increasing integration into businesses and processes in pursuit of efficiency. However, this technology is still costly, and only those with enormous resources can afford to use it. Moreover, a significant hurdle still stands in the way of developing such technologies: the lack of connectivity and communication between these businesses slows progress.
At present, AI technologies operate in closed environments, with no channel for information sharing between AI products. That lack of shared data limits their ability to learn from one another and improve existing technology. GenesisAI is a web platform that enables different AIs to communicate with each other, exchange data, and trade services. It aims to help the entire AI industry overcome these obstacles and make AI accessible and affordable for all.
“We would like to lay a foundation for the creation of Artificial General Intelligence and smash the current oligopolistic system of a few large companies basically owning all the AI.” – Archil Cheishvili, Co-Founder, GenesisAI.
Quantum Computing Inc. Announces QUBT University
Quantum Computing Inc. (Nasdaq: QUBT) announced its QUBT University program (QUBT U). The program will empower qualified students to get hands-on experience with quantum computing and quantum-ready algorithms like QUBO and QAOA by providing access to QCI’s flagship product Qatalyst™, a ready-to-run software for solving complex optimization problems on both classical and quantum computers, as well as quantum educational resources. Students can solve their first quantum-ready problem within a few days versus the many months it might otherwise take to code the same quantum problem as a quantum program.
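QUBO (quadratic unconstrained binary optimization) is the problem format that quantum-ready software of this kind accepts: minimize x^T Q x over binary vectors x. For tiny instances the objective can be checked by brute force; the sketch below illustrates the formulation only, since real solvers target sizes where enumeration is hopeless:

```python
from itertools import product

def solve_qubo(Q):
    """Brute-force minimizer for a QUBO instance: find the binary vector x
    minimizing sum_ij Q[i][j] * x[i] * x[j]."""
    n = len(Q)
    best_x, best_e = None, float("inf")
    for bits in product((0, 1), repeat=n):
        e = sum(Q[i][j] * bits[i] * bits[j]
                for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = bits, e
    return best_x, best_e

# Toy instance: diagonal terms reward selecting a variable,
# off-diagonal terms penalize selecting adjacent pairs together.
Q = [[-1, 2, 0],
     [0, -1, 2],
     [0, 0, -1]]
print(solve_qubo(Q))  # → ((1, 0, 1), -2): pick variables 0 and 2
```

Formulating a problem this way is the “quantum-ready” step: the same Q matrix can then be handed to classical, quantum-inspired, or quantum backends.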
Members of the Quantum Computing Club at the University of Notre Dame will be the first student participants in QUBT University. They will solve three complex problems of increasing difficulty as part of the initial Qatalyst work. Their experiences and feedback will be instrumental in expanding and evolving QUBT U to advance education in the field.
“The Quantum Computing Club at the University of Notre Dame is super excited to get involved with QCI’s QUBT University program to explore the power of quantum computing,” said Robert Koniuta, Founder of The Quantum Club. “QUBT University offers us the chance to get hands-on experience with quantum computers including D-Wave, IonQ and Rigetti, using Qatalyst ready-to-run software for quantum inspired classical and quantum computing. Let the quantum challenge begin!”
Satori Announces Data Security Policy Engine to Streamline and Revolutionize Data Security for Large Enterprises
Satori, a leading DataSecOps platform, announced the Satori Data Security Policy Engine to streamline and revolutionize data security for large enterprises. This new extension of Satori’s DataSecOps platform enables companies to democratize data access and modernize operations for dynamic enterprise data environments using scalable, universal and holistic data security policies.
“Implementing data security controls for specific users and groups is so complex that companies with a vision of enabling broad and fast access to data get stuck in a much different reality,” said Yoav Cohen, CTO and co-founder of Satori. “Access to data gets bogged down because engineering queues are filled with tedious tactical operational and security tasks for building database views, maintaining mapping tables and developing functions to keep data entitlements under control. Our Data Security Policy Engine solves this problem by democratizing self-service data access to user groups while still maintaining security and access policies. The result is fast speed-to-data access across an enterprise.”
Talend Announces Latest Innovations to Support Journey to Healthier Data
Talend, a global leader in data integration and integrity, announced the latest innovations added to Talend Data Fabric, a complete integration and governance platform designed to manage the health of corporate information. Available now, new enhancements from Talend provide data professionals with new, high-performance integrations to leading cloud intelligence platforms, a self-service API portal, collaborative data governance capabilities, and private connections between Amazon Web Services (AWS) and Microsoft Azure to ensure data security.
“One of the biggest crises businesses face today is a lack of agility caused by untimely, inaccessible, incomplete, and inaccurate data,” said Krishna Tammana, CTO, Talend. “With Talend’s latest release, we’re helping data professionals connect, share, and improve their data faster and more securely. These innovations represent one step in our ongoing journey to help our customers put healthier data at the center of their businesses.”
Zaloni Launches Unified Data Governance SaaS Offering on AWS
Zaloni™, a leader in data management and DataOps, announced the release of a new fully managed software-as-a-service (SaaS) offering in the Concourse level package of its Zaloni Arena DataOps platform. The offering brings enhanced business-focused data cataloging and data governance capabilities, purpose-built and optimized specifically for Amazon Web Services (AWS).
The Arena Concourse package provides visibility and control across AWS accounts: it verifies data quality, protects personally identifiable information (PII) and sensitive attributes, ensures security and compliance to drive governance initiatives, and delivers trusted data to consumers in a self-serve data marketplace. The package will be available exclusively in the AWS Marketplace.
“Organizations today face challenges transitioning their data governance practice to the cloud due to complex data architectures and difficulty standardizing governance policies across cloud and on-premise environments,” said Ashwin Nayak, Vice President of Engineering, Zaloni. “Having a robust data governance framework and a cloud-native DataOps platform is key to effectively delivering trusted data to consumers across the enterprise while ensuring compliance with regulatory requirements. The Zaloni Arena platform solves this problem through native integration with AWS Glue, Lake Formation, Amazon EMR, Amazon Athena and Amazon Redshift.”
Spell Operationalizes Advanced AI with the First Comprehensive MLOps Platform for Deep Learning
Spell, a leader in operationalizing AI for natural language processing (NLP), machine vision, and speech recognition, has launched the world’s first cloud-agnostic, end-to-end MLOps platform for deep learning. The namesake solution — developed by AI industry veterans — tracks, manages, and automates the entire deep learning workflow, from developing and training, to deploying and optimizing models at scale. The platform dramatically improves efficiency, effectiveness, and compliance for AI projects in large enterprises and AI startups alike.
“The rapid growth of the Spell user community is a gratifying validation of our vision for democratizing deep learning through a comprehensive, transparent platform for enabling and accelerating the successful adoption of advanced AI across a broad range of industries and use cases,” said Spell’s CEO and co-founder, Serkan Piantino, adding, “and we are just getting started.”
Wavicle Data Solutions Introduces Augment™, Its New Machine Learning-Powered Augmented Data Management Platform
Chicago-based Wavicle Data Solutions, a leading data analytics firm offering cloud migration services and data management consulting, announced the launch of Augment™, the company’s new data management platform for achieving clean, compliant data as quickly as possible. Augment was specifically developed to address the ongoing challenges Wavicle teams experienced with their clients’ data analytics and data management projects, and the gaps that lead to expensive development and delayed data delivery.
“As a data analytics firm, we’ve seen hundreds of cloud migration and data management projects and over and over again, we see the same challenges emerge with most of our clients – particularly around data quality and compliance,” stated Naveen Venkatapathi, president of Wavicle Data Solutions. “By automating some of the most common and critical integration processes that directly impact data quality and privacy, we have been able to help our clients achieve analytics-ready data at a fraction of the time and cost.”
Pliops Launches Extreme Data Processor to Multiply Data-Intensive Application Performance and Slash Data Center Infrastructure Costs
The data center revolution is here, and it’s being delivered by a new processor that radically simplifies how data is processed and stored. In a move that sets a new benchmark for data center scaling and efficiency, data solutions innovator Pliops announced the launch and commercial availability of its breakthrough Extreme Data Processor (XDP). Taking the spirit of Moore’s Law to its next chapter, Pliops XDP exponentially increases performance, reliability, capacity, and efficiency – multiplying the effectiveness of data center infrastructure investments. Providing new levels of processing power and storage scalability for relational, NoSQL and in-memory databases, analytics, AI/ML, 5G, IoT, and other data-intensive applications and platforms, the XDP changes the landscape for what is possible.
“The world has been transformed by data, and Pliops is keeping that transformation going,” according to Uri Beitler, Pliops founder and CEO. “Data is one of the most powerful tools we have to make the world a better place for ourselves and our families,” Beitler continued. “However, the data needs of today and tomorrow are not compatible with the data center architecture of yesterday. As data grows exponentially, our ability to handle the data must grow too. What’s needed is a revolutionary data processor – one that doesn’t take a revolution to deploy at scale. That’s exactly what we have delivered with the XDP: a solution with the capacity to solve the data challenges of today and tomorrow and sustain the momentum of data-powered innovation.”
Moogsoft Drives Innovation Within the Observability and AIOps Market with New Efficiency, Collaboration and Automation Features
Moogsoft, an AIOps pioneer and Observability leader, announced new product features and improvements that speed up incident response and collaboration by further enhancing Moogsoft’s integration with PagerDuty; automatically add context to users’ troubleshooting by prioritizing and automating event workflows while connecting multiple data catalogs containing rich contextual information; increase availability and uptime by extending integrations to Splunk for events and to Telegraf and Prometheus for metrics, and by detecting anomalies in users’ AWS infrastructure; programmatically automate event and incident workflows and processes to reduce toil through all-new API functionality; and deliver greater accuracy and flexibility through automation via selected tag propagation.
“As digital first continues to become the backbone for businesses, there is a defined need for innovation in the world of AIOps and Observability to assure a superior customer experience,” said Adam Frank, Moogsoft Vice President, Product Management and UX Design. “At Moogsoft, we are dedicated to leading that charge, further developing our platform to align with the needs of our current and future customers.”
Couchbase Fuses Strengths of Modern and Legacy Databases to Accelerate Enterprise Applications for Customers
Couchbase, Inc. (NASDAQ: BASE), provider of a leading modern database for enterprise applications, announced the general availability of Couchbase Server 7. This landmark release bridges the best aspects of relational databases like ACID transactions with the flexibility of a modern database, allowing enterprises to confidently accelerate strategic initiatives such as more quickly moving business-critical applications into the cloud, improving application flexibility and increasing developer agility. With Couchbase Server 7, enterprise development teams get one unified platform and no longer need to use one database for transactions and a separate database for developer agility and scale. This means that customers can simplify their database architectures, expand Couchbase usage into enterprise transactional applications and reduce operating costs through performance enhancements.
“With Couchbase Server 7, the relational versus NoSQL database debate is over. Modern developers no longer have to struggle with having multiple databases: a relational database for transactionality, and a NoSQL database for flexibility and scale. We are delighted to be the first modern Database-as-a-Service provider to combine traditional relational database functionality like SQL and transactions with the flexibility and scalability of NoSQL. The data containment model and distributed SQL transactions introduced in Couchbase Server 7.0 give developers a familiar programming model on a distributed database. In addition, there are 30 other innovations covering query, search, eventing, analytics and geo-replication. No other database has organically fused all of these capabilities in a single database. These innovations give developers an astonishing advantage to build modern enterprise applications for a connected world.” – Ravi Mayuram, senior vice president of engineering and CTO, Couchbase
AtScale AI-Link Connects Business Intelligence and Enterprise AI with Semantic Layer to Scale Augmented Analytics and Data Science
AtScale, a leading provider of semantic layer solutions for modern business intelligence and data science teams, announced the availability of AtScale AI-Link™. AI-Link provides a Python interface to AtScale, rich with business context and metrics, to connect data science and augmented analytics programs with enterprise business intelligence (BI). The AtScale semantic layer delivers the governance, consistency, and compliance needed to scale enterprise BI and artificial intelligence (AI) while accelerating live connections to public and private cloud data.
AtScale’s semantic layer insulates data consumers from the complexity of raw data, with business-oriented data models connected to live cloud data platforms including Snowflake, AWS, Microsoft Azure, Google Cloud, and Databricks. Hundreds of forward-thinking data teams use AtScale to let BI teams consume live cloud data with the tools of their choice, including Tableau, Power BI, and Excel. With AI-Link, data scientists can use Python to access the same governed source of enterprise metrics.
“Giving line-of-business users and executives the ability to access, analyze and act on machine learning predictions and augmented analytics is the real value of enterprise AI,” said Christopher Lynch, Executive Chairman and CEO of AtScale. “We’re seeing more and more organizations embrace the convergence of artificial intelligence and business intelligence as a fundamental component of their digital transformation.”
Unbabel Launches MT-Telescope to Deeply Understand Machine Translation Performance
Unbabel, an AI-powered Language Operations platform that helps businesses deliver multilingual support at scale, announced the launch of MT-Telescope – a new tool that enables developers and users of Machine Translation (MT) systems to deeply analyze and understand MT quality performance. Building on Unbabel’s automated quality measurement framework COMET, MT-Telescope is an open source tool that for the first time lifts the hood on MT quality analysis and provides unique granularity and quantitative insights into the quality performance of MT systems.
“At Unbabel, we constantly work on developing, training, maintaining, and deploying MT systems at a rapid pace and to high quality standards. This challenging need drives our research and development objectives, especially in the domain of quality analysis and evaluation,” said Alon Lavie, VP of Language Technologies at Unbabel. “MT-Telescope helps our LangOps specialists and development teams make smarter decisions for customers about which MT system better suits their needs, and enables the MT research community to easily use best practice analysis methods and tools to rigorously benchmark their advances.”
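The head-to-head system comparison a tool like MT-Telescope enables can be illustrated with a deliberately crude segment-level score — token-overlap F1 standing in for a learned metric like COMET. The function names below are invented for this sketch:

```python
from collections import Counter

def token_f1(hypothesis, reference):
    """Token-overlap F1 between a translation and its reference -- a crude
    stand-in for a learned quality metric, used only to illustrate scoring."""
    h, r = Counter(hypothesis.split()), Counter(reference.split())
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    prec, rec = overlap / sum(h.values()), overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)

def compare_systems(sys_a, sys_b, refs):
    """Score two MT systems on the same references and tally per-segment
    wins -- the granular head-to-head view described in the article."""
    wins = {"A": 0, "B": 0, "tie": 0}
    for a, b, ref in zip(sys_a, sys_b, refs):
        fa, fb = token_f1(a, ref), token_f1(b, ref)
        wins["A" if fa > fb else "B" if fb > fa else "tie"] += 1
    return wins

refs  = ["the cat sat on the mat", "open the door"]
sys_a = ["the cat sat on a mat", "open door"]
sys_b = ["a cat is on the mat", "please open the door now"]
print(compare_systems(sys_a, sys_b, refs))  # → {'A': 2, 'B': 0, 'tie': 0}
```

Segment-level tallies like this, rather than a single corpus score, are what let teams see exactly where one MT system outperforms another.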
Trifacta Delivers Head Start on Cloud Data Engineering With New Template Gallery
Trifacta, the Data Engineering Cloud company, announced the launch and general availability of the industry’s first pre-built cloud data engineering templates, furthering the acceleration of self-service modern data management. Trifacta’s data templates gallery gives users a head start by allowing them to deploy data engineering workflows in minutes that can be tailored to solve specific data challenges in common use cases. The data templates enable and encourage sharing and collaboration among modern data workers in any line of business, ultimately empowering companies to better use data to increase productivity, improve sales, bolster marketing initiatives, and more.
“The pre-built templates from Trifacta allow companies to start from a complete workflow and quickly tailor it to their specific needs, dramatically reducing the cycle time to go from raw data to high quality, automated data pipelines,” said Sean Kandel, Co-Founder and CTO of Trifacta. “As more companies modernize and migrate their data to the cloud, the opportunity to become truly data-driven is there for the taking.”
AI Startup KaJ Labs set to Launch Blockchain Rival to Ethereum
Joel Kasr, founder of KaJ Labs, announced the upcoming launch of Lithosphere (LITHO), a unique new AI blockchain platform. Kasr is the creator of Lithosphere, the first blockchain to utilize Deep Neural Networks (DNN) in smart contracts. Litho is the native token of the Lithosphere blockchain and the private token sale for $LITHO is now open. KaJ Labs also announced its transition from a for-profit organization to the non-profit KaJ Labs Foundation.
Lithosphere marks a new era in blockchain technology. Its launch is the first time that Deep Learning will be used in contracts through embedded DNNs in the code. Lithosphere is based on diverse tokens and will connect multiple types of value transfer methods under a single management structure. Lithosphere will implement a novel Myriad Distribution Key Management (MDKM) to enhance security.
The release of Lithosphere introduces a variety of innovations, including a new token standard (LEP100) and novel consensus algorithm (LinBFT). The ultimate goal for KaJ Labs Foundation is for Lithosphere to connect all past and future blockchains, break down monopoly barriers, and create a network of blockchains that can communicate with each other in a decentralized manner.
Datadobi Announces Vendor-Neutral and Scalable Unstructured Data Mobility Engine
Datadobi, a leader in unstructured data management software, announced the launch of a vendor-neutral unstructured data mobility engine designed to handle the scale and complexity of the world’s largest storage environments. Version 5.12 (v5.12) unifies the most sophisticated and field-hardened capabilities and techniques to deliver the data mobility required for sophisticated, vendor-neutral, and scalable unstructured data management solutions.
“The scale and complexity of unstructured data in today’s heterogeneous storage environments have proven to be quite a challenge for organizations — and for good reason. Managing unstructured data, whether it be to reduce risk and cost or to make better use of the data, is a specialist activity. Simply taking existing structured data management applications and trying to make them run on vast amounts of unstructured data is a fool’s errand,” said Carl D’Halluin, CTO, Datadobi. “Datadobi’s engine enables customers and partners alike to gain control of and utilize their unstructured data across environments on-premises and in the cloud. It gives customers the complete solution they are looking for in combination with their storage vendors, cloud providers, and application partners.”
Varada Ships Version 3.0, Adding Elastic Scaling to the Power of Indexing for Big Data Analytics, Extending TCO and Performance Advantages
Varada, the data lake query acceleration innovator, unveiled version 3.0 of its data analytics platform, now delivering a powerful and cost-effective alternative to offerings like Snowflake, Redshift, Athena, Presto, Trino and BigQuery for at-scale big data analytics users who rely on the power of indexing to extract insights from massive, unstructured data sets.
The new version marries the power of cloud elasticity and the query power of indexing for big data analytics, giving data teams the ability to scale analytics workloads rapidly and meet fluctuating demand. It delivers a dramatic increase in cost performance and cluster elasticity as compared to the previous version. In addition, version 3.0 eliminates the need to keep high-performance and expensive SSD NVMe (Solid-State Drive Nonvolatile Memory Express) compute instances idling when the cluster is not in use.
“Varada was built on the premise that indexing can transform big data analytics, if done correctly,” said Eran Vanounou, CEO of Varada. “With version 3.0, the Varada platform is now the most powerful and cost-effective way to leverage the power of big data directly atop of your data lake. Query acceleration optimizations are time consuming to create, including indexing. So, we want to ensure that the platform operates autonomously, including quickly reacting to changing demand. V 3.0 introduces a new layer to Varada’s platform. We’ve separated the index and data from the SSD nodes, creating a ‘warm’ tier in the data lake that allows us to preserve those indexes much faster and at a much lower cost. By doing so we’re bringing the power of cloud computing scaling to big data indexing.”
Sign up for the free insideBIGDATA newsletter.
Join us on Twitter: @InsideBigData1 – https://twitter.com/InsideBigData1