Key concepts


```mermaid
sequenceDiagram
    participant User
    participant Application

    User->>Application: Load Dataset
    loop Task Execution
        User->>Application: Execute Task
        Application->>Application: Modify State
    end
    User->>Application: Save Result
```

Users can load datasets of any supported file type and apply multiple tasks to manipulate the data. Tasks can operate on selected columns, allowing users to specify input columns and choose output columns for task results. For instance, the "Content/Hash" task takes a user-selected column as input and generates a hash value from its contents, offering a flexible and customizable approach to data transformation and analysis. Read more about tasks.
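A column-oriented task like "Content/Hash" could be sketched as follows. This is an illustrative outline only, not the application's actual API; the function and column names are hypothetical, and SHA-256 is assumed as the hash algorithm.

```python
import hashlib

# Hypothetical sketch of a "Content/Hash"-style task: read an input
# column, write a hash of each value into a chosen output column.
# Rows are modeled as plain dicts for illustration.
def content_hash_task(rows, input_col, output_col):
    for row in rows:
        value = str(row[input_col]).encode("utf-8")
        row[output_col] = hashlib.sha256(value).hexdigest()
    return rows

dataset = [{"name": "alice"}, {"name": "bob"}]
dataset = content_hash_task(dataset, "name", "name_hash")
```

The user selects `name` as the input column and `name_hash` as the output column; the task leaves all other columns untouched.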

Workflow Concept

The application operates on the principle of a streamlined, task-based workflow reminiscent of the Linux pipe paradigm. This approach allows users to efficiently manipulate and process data by applying individual tasks successively, with the output of each task serving as the input for the subsequent one.
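The pipe paradigm can be illustrated with a minimal sketch, assuming each task is a function from a dataset to a new dataset (the task names here are invented for the example):

```python
from functools import reduce

# Two illustrative tasks: each takes the current dataset and
# returns a transformed copy, like a stage in a shell pipeline.
def strip_whitespace(rows):
    return [r.strip() for r in rows]

def uppercase(rows):
    return [r.upper() for r in rows]

def run_pipeline(dataset, tasks):
    # Apply tasks left to right; the output of each task
    # becomes the input of the next, as with `cmd1 | cmd2`.
    return reduce(lambda data, task: task(data), tasks, dataset)

result = run_pipeline(["  foo ", " bar"], [strip_whitespace, uppercase])
# result == ["FOO", "BAR"]
```

The ordering matters exactly as it does with shell pipes: swapping the two tasks would uppercase before stripping, which here happens to give the same result, but in general each stage sees only its predecessor's output.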

Key Elements

Single Task Execution: Users interact with the application by executing one task at a time. Each task is designed to perform a specific operation on the dataset.

State Management

The application dynamically manages the state of the dataset as tasks are executed. The output generated by each task modifies the current state, which becomes the input for the next task in the pipeline.

File-As-Database Concept

The application adopts a unique approach where any file can be treated as a database. Users can load various file formats such as plain text, CSV, TSV, Parquet, and JSON directly into the application. Once loaded, these files serve as the initial dataset, which can be modified and updated iteratively through task execution.
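A loader along these lines might dispatch on the file extension, as in this sketch using only the standard library (Parquet is omitted because it would require an extra dependency such as pyarrow; the function name is hypothetical):

```python
import csv
import json
from pathlib import Path

# Illustrative loader: turn a file into an initial dataset based on
# its extension. Tabular formats become lists of row dicts; plain
# text becomes one row per line.
def load_dataset(path):
    path = Path(path)
    if path.suffix == ".csv":
        with path.open(newline="") as f:
            return list(csv.DictReader(f))
    if path.suffix == ".tsv":
        with path.open(newline="") as f:
            return list(csv.DictReader(f, delimiter="\t"))
    if path.suffix == ".json":
        return json.loads(path.read_text())
    # Fall back to plain text: each line is a row.
    return path.read_text().splitlines()
```

Whatever the source format, the loader normalizes it into one in-memory representation, which is what lets the same tasks apply uniformly to any supported file type.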

Incremental Updates

Tasks applied to the dataset result in incremental updates to its structure and content. Users can iteratively refine and transform the dataset by chaining together multiple tasks, each building upon the changes made by the preceding ones.
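Incremental updates can be sketched as tasks that each build on the structure left behind by the previous one. The task and column names below are invented for the example:

```python
# First task: enrich each row with a derived "length" column.
def add_length_column(rows):
    return [{**r, "length": len(r["text"])} for r in rows]

# Second task: filter rows using the column the previous task added,
# showing how each task builds on the preceding one's changes.
def filter_short(rows):
    return [r for r in rows if r["length"] >= 4]

data = [{"text": "hi"}, {"text": "hello"}]
data = add_length_column(data)
data = filter_short(data)
# data == [{"text": "hello", "length": 5}]
```

Note that `filter_short` would fail on the original dataset, since the `length` column only exists after `add_length_column` has run; this dependency between stages is exactly what makes the updates incremental.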