Making sense of data with low-code environments – VentureBeat

With data continuing to grow, organizations are increasingly centralizing their data activities. However, when it comes to tools, the landscape is still highly fragmented. Most business analysts are limited to classic spreadsheets and BI tools for static data manipulation and exploration, while data scientists hand-code predictive models in a variety of different languages and rely on IT for deployment. Meanwhile, the data engineers provide everyone access to appropriate data aggregates that are extracted from a multitude of data sources, on-prem and in the cloud.

An effective low-code environment enables those people to work together more productively and, at the same time, provides a single platform suitable for those diverse audiences. The business users can focus on aggregating and exploring data, the data scientists can apply sophisticated machine learning (ML) and artificial intelligence (AI) methods, and the data engineers can make sure that data manipulations are run in the right environment and follow company compliance rules. The right low-code environment essentially serves as a no-code environment for some of these users and as a visual programming environment for others, who are building more complex solutions.

Additionally, together with IT, the team sets up the appropriate productization protocols, so what they created can be continuously deployed into production — as an interactive application, on the edge, or simply automated for regular execution. The right environment also makes the compliance department happy.

Let’s look at the different stakeholders and how they benefit from a low-code environment.

No code for business experts

Business analysts need automatically generated summaries and visualizations of data overviews to quickly identify changes in trends (or simply to save time on auditing and regular reporting). Also, they benefit from the ability to look at their data from various angles, often leading to new insights into ongoing operations.

The appropriate low-code environment makes these tasks easier than writing Excel macros and less limiting than the data aggregation possible inside a standard BI tool. Without ever touching code, a business user can model a data flow directly and intuitively. This "no-code" use case has the added side effect that the process is properly documented and can be explained (or handed over) to others easily.

The right environment doesn't stop at automated, well-documented data aggregations and visualizations. Now that our business experts have more time on their hands, they can start exploring more data, as well as other techniques. Gradually, they'll learn more about modern data science and continuously expand the repertoire of methods that help them make sense of their data. This opens the door to taking steps toward becoming a data scientist, and since their data science colleagues are already using the same environment, they can benefit directly from the examples and blueprints those colleagues have created.

Low code for data engineers

Being able to quickly generate and hand over different views on, ideally, all of the corporate data repositories is still one of the biggest hurdles to making sense of all of that data. We can keep waiting for the corporate-wide, well-organized and always up-to-date data warehouse to finally materialize, or we can keep relying on a team of data engineers to respond quickly and provide the right view. Neither approach is efficient, and neither is likely to happen.

A low-code environment provides the data experts with the ability to create those data views virtually, on the fly, and hand them over to their users. The data engineers can design internal data sources that comply with governance rules, while their users leverage the same environment to customize the data view further to meet their needs.

Done right, the data engineers can even switch from one data source (e.g., their current cloud storage provider) to another one — or add yet another new source to the mix — without their users needing to worry. All they see is the same view on the data, and their low-code solutions continue to work without a glitch. That way, the data engineers are continuously deploying new and updated views on their virtual data warehouse. And, again, the low-code environment automatically documents every step taken along the way.

In the end, what the data engineers are doing is visual programming of mostly SQL. If they want to, they can reach out and provide actual code snippets as well, but in a well-designed low-code environment, this will rarely be needed. And if it is, it will become encapsulated in the low-code flow and be governed and documented just like the rest.
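The view-swapping pattern described above can be sketched with ordinary SQL views. Below is a minimal, hypothetical illustration using Python's built-in sqlite3 module; the table, view, and column names are invented for this sketch and do not reflect any particular product's API:

```python
import sqlite3

# In-memory database standing in for a corporate data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_cloud (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales_cloud VALUES (?, ?)",
                 [("EMEA", 100.0), ("APAC", 250.0), ("EMEA", 50.0)])

# The data engineer publishes a governed view; consumers query the view,
# never the underlying table.
conn.execute("""CREATE VIEW v_sales_by_region AS
                SELECT region, SUM(amount) AS total
                FROM sales_cloud GROUP BY region""")
rows = dict(conn.execute("SELECT region, total FROM v_sales_by_region"))
# rows now maps each region to its total, e.g. EMEA -> 150.0

# Later the engineer swaps the backing source (say, a new storage
# provider) and re-points the view; consumers' queries stay unchanged.
conn.execute("CREATE TABLE sales_new_provider (region TEXT, amount REAL)")
conn.execute("INSERT INTO sales_new_provider VALUES ('EMEA', 300.0)")
conn.execute("DROP VIEW v_sales_by_region")
conn.execute("""CREATE VIEW v_sales_by_region AS
                SELECT region, SUM(amount) AS total
                FROM sales_new_provider GROUP BY region""")
rows = dict(conn.execute("SELECT region, total FROM v_sales_by_region"))
print(rows)  # {'EMEA': 300.0}
```

The consumers' query text never changes; only the view definition does, which is exactly the kind of indirection a low-code data view provides at larger scale.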

Low code for data scientists

Picking from the wealth of established data science techniques, trying out bleeding-edge algorithms, and automating select pieces of the model optimization and/or feature engineering process is still hard to do in a way that makes the results easy to deploy. Many environments are far too complex or too simplistic or, worst of all, fail to cover the depth of what a data science team wants access to. A data scientist wants precise control over all of the little knobs and dials of a learning algorithm, and they want choice: the ability to pick from a wide repertoire of techniques.

A serious low-code environment gives data scientists flexibility in the tools they use. At the same time, it lets them focus on the interesting parts of their job while abstracting away tool interfacing and the differing versions of the libraries involved. A good environment lets data scientists reach for code if they want to, but ensures they do not have to touch code every time they want to control the internals of an algorithm. Essentially, this allows visual programming of a data flow process; data science done for real is, after all, complex.

If done right, the low-code environment continues to allow access to new technologies, making it future-proof for ongoing innovations in the field. But the best low-code environments also ensure backward compatibility and include a mechanism to easily package trained models, together with all the necessary data transformation steps, and deploy them into production.
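The idea of deploying a model together with its transformation steps can be sketched in plain Python. The scaler, threshold "model," and pipeline below are toy stand-ins invented for illustration, not any particular platform's deployment mechanism; the point is that one serialized artifact carries both the preprocessing and the model:

```python
import pickle

class Scaler:
    """Learns a simple min-max scaling from training data."""
    def fit(self, xs):
        self.lo, self.hi = min(xs), max(xs)
        return self
    def transform(self, xs):
        span = (self.hi - self.lo) or 1.0
        return [(x - self.lo) / span for x in xs]

class ThresholdModel:
    """A toy 'model': predicts 1 when the scaled value exceeds 0.5."""
    def predict(self, xs):
        return [1 if x > 0.5 else 0 for x in xs]

class Pipeline:
    """Bundles the preprocessing step with the model."""
    def __init__(self, scaler, model):
        self.scaler, self.model = scaler, model
    def predict(self, xs):
        return self.model.predict(self.scaler.transform(xs))

pipe = Pipeline(Scaler().fit([0.0, 10.0, 20.0]), ThresholdModel())

# Serialize the whole pipeline -- transformations included -- so that
# production scoring cannot drift from training-time preprocessing.
blob = pickle.dumps(pipe)
restored = pickle.loads(blob)
print(restored.predict([5.0, 15.0]))  # [0, 1]
```

Because the scaler travels inside the pickled pipeline, whoever deploys the artifact cannot accidentally score raw, unscaled data with the model.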

No code for the CxO (and other business users)

The relationship between the data science department and their end users is typically strained. The business people often complain that the data folks work slowly, don’t quite understand the real problem and, at the end of it all, don’t quite arrive at the answer the business side was looking for. The data science team complains about how much explaining they have to do and how underappreciated their hard work is. Both sides are frustrated: the business didn’t get what they wanted, and the data science team doesn’t get the credit.

A low-code environment can help here as well: it allows the data science team to show the business users, in an intuitive and visual way, how they aim to get to the answer. The business users will not need to understand all of the nitty-gritty details of how data is blended and which type of ML model makes the prediction, but they can follow the flow of the data and provide instant feedback when they aren't getting the answers they are seeking. For the data science team, the low-code environment also allows much quicker turnaround; adjustments to the data flow are fast and easy.

The result is data science no longer done in isolation but rather in a collaborative effort, efficiently leveraging the expertise of both data and business experts. A proper low-code environment also allows them to deploy quickly into production and make the resulting API services or web applications as interactive as needed. Instead of building a dozen different versions of the application, they may decide to simply deploy one that allows a bit more interaction to address those dozen needs (and a dozen future ones).

No code for the CDO

Making sure all data is used properly to help speed up and improve operations everywhere in the organization is still extremely hard, which is why many organizations now have a central “data department.” But that doesn’t fix the problem; it just acknowledges that it exists and puts the responsibility to make things work onto someone’s shoulders.

The right low-code environment removes a ton of friction from the way data is used inside an organization. First, the data experts can work together in one collaborative environment. They don't need to wait until all data and tools are integrated into one system but can blend data and tools as needed. Second, they can design solutions for the actual end users together with the business users, and can easily and reliably move those solutions into production. And third, the right low-code environment lets an organization ensure governance and auditability out of the box.

But looking into the future, there is more: Having built low-code workflows to solve specific problems, their inherent, built-in documentation makes it easy to use them as blueprints for future problems, so the team doesn’t always have to start from scratch. If the team takes the right, modular approach, they can readily build components that solve parts of a problem, such as establishing well-defined access to the organization’s data lakes and providing templates for standardized reports. And finally, if it’s an open environment, all of the new technologies that are currently being invented can still be used by the data science team. Adopting such an open low-code environment doesn’t come at the cost of keeping up with the latest and greatest technologies.

A low-code environment makes sense

A low-code environment is helpful for successfully making sense of corporate data on an ongoing basis. It enables collaboration between all stakeholders, allows agile creation of new insights, data services, and applications, and brings inherent transparency that's critical for governance.

An effective low-code environment also keeps everybody in the organization working with modern technologies without the constant need to switch tools, because it serves as an abstraction layer over tool complexities. This is its most critical property: making sure an organization uses data technology that's truly future-proof and lock-in free.

Michael Berthold is the cofounder and CEO of KNIME.

