
Introduction

Welcome to the official documentation for Kinesis API, a powerful all-in-one framework that transforms how you create, manage, and scale APIs.

What is Kinesis API?

Kinesis API is a comprehensive solution for API development that combines:

  • A custom-built, high-performance database system (Kinesis DB)
  • A visual editor for creating API routes with the X Engine
  • Integrated management tools that eliminate the need for multiple external services

Whether you're prototyping a simple API or building complex, interconnected systems, Kinesis API provides the tools to accelerate development without sacrificing quality or control.

Key Features

  • All-in-one Platform: API creation, database management, and execution in a single unified environment
  • Visual Route Builder: Create complex API logic without writing traditional code using our block-based system
  • Custom Database: Built-in ACID-compliant database system with multiple storage engines and strong schema management
  • Performance-Focused: Developed in Rust for maximum efficiency and reliability
  • Flexible Deployment: Deploy anywhere with our Docker images
  • Comprehensive Management: User authentication, role-based access control, and extensive monitoring capabilities

Getting Started

New to Kinesis API? Start here:

  1. Installation Guide - Set up Kinesis API on your system
  2. Initialization - Configure your instance for first use
  3. API Tutorials - Build your first API with Kinesis API

Core Components

Dive deeper into the key technologies that power Kinesis API:

  • Kinesis DB - Our custom-built database system
  • X Engine - The visual API builder that makes complex logic accessible
  • API Reference - Complete reference for all available endpoints

Usage Guides

Learn how to use Kinesis API effectively:

Support & Community

If you need help or want to contribute:

  • Check our FAQ for common questions
  • Join our community discussions
  • Submit bug reports or feature requests
  • Follow our tutorials for practical examples

Further Steps

Once you're familiar with the basics:

Thank you for choosing Kinesis API. We're excited to see what you'll build!

Kinesis DB

Kinesis DB is a custom-built, ACID-compliant embedded database system written entirely in Rust. It forms the core data storage and management component of the Kinesis API platform, eliminating dependencies on external database systems while providing significant performance advantages.

Key Features

Multiple Storage Engines

Kinesis DB offers multiple storage engines to match your application's specific needs:

  • In-Memory Engine: Ultra-fast, volatile storage ideal for temporary data, caching, or situations where persistence isn't required. Provides maximum performance but data is lost on shutdown.

  • On-Disk Engine: Durable storage with ACID guarantees for critical data. Ensures data survives system restarts and power failures through persistent storage and write-ahead logging.

  • Hybrid Engine: Combines the speed of in-memory operations with the durability of on-disk storage. Uses intelligent caching with persistent backing for balanced performance, making it the recommended default choice for most applications.

Schema Management

Kinesis DB provides robust schema management capabilities:

  • Flexible Schema Definition: Create and modify schemas at runtime, allowing your data model to evolve with your application's needs.

  • Strong Type System: Supports various data types with strict validation to ensure data integrity.

  • Comprehensive Constraints:

    • Required fields (non-null constraints)
    • Unique constraints to prevent duplicate values
    • Default values for fields
    • Pattern matching for string fields using regular expressions
    • Min/max value constraints for numeric fields
    • Custom validation rules
  • Schema Versioning: Track and manage schema changes over time.
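
To make these capabilities concrete, here is a purely illustrative, JSON-style sketch of a schema that combines several of the constraints above. The syntax and field names are hypothetical; refer to the Structures documentation for the exact format used by Kinesis API.

{
  "table": "users",
  "fields": [
    {
      "name": "username",
      "type": "string",
      "required": true,
      "unique": true,
      "min_length": 3,
      "max_length": 32,
      "pattern": "^[a-z0-9_]+$"
    },
    { "name": "age", "type": "integer", "min": 0, "max": 150 },
    { "name": "role", "type": "string", "default": "VIEWER" }
  ]
}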

Transaction Support

Kinesis DB is designed with full ACID compliance:

  • Atomicity: All operations within a transaction either complete fully or have no effect.

  • Consistency: Transactions bring the database from one valid state to another, maintaining all defined constraints.

  • Isolation: Multiple isolation levels to control how concurrent transactions interact:

    • ReadUncommitted: Lowest isolation, allows dirty reads
    • ReadCommitted: Prevents dirty reads
    • RepeatableRead: Prevents non-repeatable reads
    • Serializable: Highest isolation, ensures transactions execute as if they were sequential
  • Durability: Once a transaction is committed, its changes are permanent, even in the event of system failure.

  • Deadlock Detection and Prevention: Automatic detection and resolution of transaction deadlocks.

  • Write-Ahead Logging (WAL): Ensures durability and supports crash recovery.
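
As a quick illustration of atomicity, consider a funds transfer expressed in the SQL-inspired command syntax described under Query Interface below. The exact statements are hypothetical, but the guarantee is the one stated above: both updates apply, or neither does.

BEGIN TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
-- If either UPDATE fails, rolling back leaves both balances untouched.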

Performance Optimizations

Kinesis DB incorporates several performance optimizations:

  • Efficient Buffer Pool Management: Minimizes disk I/O by caching frequently accessed data in memory.

  • Configurable Caching Strategies: Tune caching behavior to match your workload characteristics.

  • Automatic Blob Storage: Large string values are automatically managed for efficient storage and retrieval.

  • Asynchronous I/O Operations: Non-blocking I/O to maximize throughput.

  • Indexing: Supports various indexing strategies to speed up queries.

  • Query Optimization: Intelligent query planning and execution.

Query Interface

Kinesis DB provides an intuitive query interface:

  • SQL-Inspired Command Syntax: Familiar syntax for developers with SQL experience.

  • CRUD Operations: Comprehensive support for Create, Read, Update, and Delete operations.

  • Data Search Capabilities:

    • Equality matching
    • Range queries
    • Pattern matching with regular expressions
    • Full-text search capabilities
  • Multiple Output Formats:

    • Standard output
    • JSON formatting for API responses
    • Table format for human-readable outputs
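
For flavor, here is a hypothetical CRUD session in SQL-like syntax. It only illustrates the kinds of operations listed above; the actual Kinesis DB command syntax may differ, so consult the REPL documentation before relying on it.

-- Create
INSERT INTO users (id, username, role) VALUES (1, 'john_doe', 'VIEWER');
-- Read, with a pattern match
SELECT * FROM users WHERE username LIKE 'john%';
-- Update
UPDATE users SET role = 'AUTHOR' WHERE id = 1;
-- Delete
DELETE FROM users WHERE id = 1;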

Configuration

Kinesis DB can be configured through environment variables:

| Variable | Description | Possible Values | Default |
|----------|-------------|-----------------|---------|
| DB_NAME | Name of the database (affects file names) | Any valid filename | main_db |
| DB_STORAGE_ENGINE | Select storage engine | memory, disk, hybrid | hybrid |
| DB_ISOLATION_LEVEL | Default transaction isolation level | read_uncommitted, read_committed, repeatable_read, serializable | serializable |
| DB_BUFFER_POOL_SIZE | Configure the buffer pool size | Any positive integer | 100 |
| DB_AUTO_COMPACT | Enable/disable automatic database compaction | true, false | true |
| DB_RESTORE_POLICY | Control how transactions are recovered after a crash | discard, recover_pending, recover_all | recover_pending |
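
For example, a .env file that selects the hybrid engine with a larger buffer pool could look like this (variable names from the table above; the values are illustrative):

DB_NAME=main_db
DB_STORAGE_ENGINE=hybrid
DB_ISOLATION_LEVEL=serializable
DB_BUFFER_POOL_SIZE=500
DB_AUTO_COMPACT=true
DB_RESTORE_POLICY=recover_pending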

Best Practices

Performance Optimization

  • Choose the appropriate storage engine for your use case
  • Tune the buffer pool size based on your memory availability and working set size
  • Use indexes for frequently queried fields
  • Batch related operations in transactions
  • Consider denormalizing data for read-heavy workloads

Data Integrity

  • Use constraints to enforce business rules at the database level
  • Always use transactions for related operations
  • Implement proper error handling for database operations
  • Regularly back up your data

Schema Design

  • Design schemas with future growth in mind
  • Use appropriate data types for each field
  • Consider query patterns when designing your schema
  • Use meaningful field names and conventions

Current Schema

Kinesis API Database Schema

The above diagram illustrates the current database schema used in Kinesis API. The schema represents the relationships between core components of the system, including users, configs, projects, collections, structures and data objects. This schema is implemented directly in Kinesis DB, leveraging the type system and constraints described earlier to ensure data integrity across all operations.

Conclusion

Kinesis DB provides a powerful, embedded database solution that combines performance, reliability, and ease of use. By integrating directly with Kinesis API, it eliminates the need for external database dependencies while providing all the features expected of a modern database system.

X Engine

The X Engine is the core system that powers Kinesis API's visual API development capabilities. It allows developers to design, implement, and deploy complex API routes using a block-based visual system, dramatically reducing the learning curve and development time typically associated with API creation.

Overview

At its essence, the X Engine is an execution framework that transforms visual blocks into functional API endpoints. By abstracting away much of the underlying complexity, it enables developers to focus on business logic rather than implementation details. The X Engine's modular architecture makes it both powerful for experienced developers and accessible to those with less backend expertise.

Core Components

The X Engine consists of four primary component types that work together to handle API requests:

Processors

Processors control the execution flow of your API logic:

  • Function: Handle the flow of execution within an API route
  • Examples:
    • Loop: Iterate through collections of data
    • If/Else: Implement conditional logic
    • Try/Catch: Handle errors gracefully
    • Return: Send a response back to the client
    • Break: Exit from a loop
    • Fail: Trigger an error state with a specific message

Processors are the backbone of API route logic, allowing you to implement complex algorithms and workflows through visual components.

Resolvers

Resolvers map requests to the appropriate data sources:

  • Function: Retrieve and manipulate data from various sources
  • Examples:
    • Table: Query the Kinesis DB for records
    • Request: Extract data from the incoming API request
    • Auth: Access authentication information
    • State: Manage state between route executions
    • Config: Retrieve configuration values
    • External: Connect to external APIs or services

Resolvers serve as the bridge between your API endpoints and the data they need to operate, whether from internal or external sources.

Convertors

Convertors transform data between different formats:

  • Function: Transform and validate data during processing
  • Examples:
    • String: Manipulate text data
    • Number: Perform mathematical operations
    • Boolean: Evaluate logical conditions
    • Array: Work with collections of items
    • Object: Manipulate structured data
    • JSON: Parse and stringify JSON data
    • Date: Handle date and time operations

Convertors ensure that data is in the correct format at each stage of processing, reducing errors and simplifying data manipulation.

Definitions

Definitions specify the expected structure of data:

  • Function: Define the expected result format for each block
  • Examples:
    • Schema: Define the structure of data objects
    • Response: Specify the format of API responses
    • Request: Describe expected request formats
    • Error: Define standardized error structures

Definitions act as contracts between different parts of your API, ensuring consistency and making your API more predictable and easier to use.

How It Works

The X Engine processes API requests through the following stages:

  1. Request Reception: When an API endpoint receives a request, the X Engine initializes the execution context.

  2. Block Execution: The engine processes each block in sequence according to the route's configuration, with processors controlling the flow.

  3. Data Retrieval: Resolvers fetch necessary data from appropriate sources.

  4. Data Transformation: Convertors format and transform data as needed.

  5. Validation: The engine validates data against definitions at various stages.

  6. Response Generation: Finally, a response is constructed and returned to the client.

This entire process is visually designed through the Kinesis API interface, allowing you to create sophisticated API logic without writing traditional code.

Block Connections

Blocks in the X Engine are connected through a visual interface that represents the flow of data and execution:

  • Inputs: Each block can accept inputs from other blocks or direct values
  • Outputs: Blocks produce outputs that can be used by subsequent blocks
  • Conditions: Control blocks (like If/Else) can have multiple output paths
  • Variables: Named references allow data to flow between different parts of your route

The visual representation makes it easy to understand complex flows and troubleshoot issues.

Visual Editor

The X Engine is integrated with a visual editor in the Kinesis API web interface, providing:

  • Drag-and-drop Interface: Easily add and arrange blocks
  • Real-time Validation: Immediate feedback on configuration issues
  • Testing Tools: Test your routes directly from the editor
  • Version History: Track changes to your routes over time
  • Visual Debugging: Follow execution flow with visual indicators

The editor makes creating complex API routes accessible to developers of all skill levels.

Advanced Features

The X Engine includes several advanced features for complex API development, some of which are still in active development:

Authentication Integration

The X Engine seamlessly integrates with Kinesis API's authentication system:

  • Role-based Access: Control which users can access specific routes
  • JWT Validation: Automatically validate authentication tokens
  • Permission Checking: Enforce granular permissions within routes

Custom Functions

Extend the X Engine with custom functions:

  • Reusable Logic: Create custom blocks for frequently used operations
  • Library Integration: Wrap third-party libraries in custom blocks
  • Complex Algorithms: Implement specialized business logic as reusable components

Middleware Support

Apply consistent processing across multiple routes:

  • Pre-processing: Validate requests before main processing
  • Post-processing: Format responses consistently
  • Error Handling: Implement global error management

Versioning

Manage API changes over time:

  • Route Versioning: Maintain multiple versions of the same endpoint
  • Migration Paths: Provide smooth transitions between versions
  • Deprecation Management: Gracefully phase out older endpoints

Best Practices

Performance Optimization

  • Minimize database queries by combining resolvers where possible
  • Use caching for frequently accessed data
  • Process only the data you need using selective field retrieval

Security Considerations

  • Always validate user input through definitions
  • Implement proper authentication and authorization
  • Use parametrized queries to prevent injection attacks
  • Avoid exposing sensitive data in responses

Maintainability

  • Name blocks clearly to document their purpose
  • Group related functionality into logical sections
  • Comment complex logic for future reference
  • Use consistent patterns across similar routes

Testing

  • Test edge cases and error conditions
  • Validate response formats against your definitions
  • Check performance under various load conditions
  • Test with realistic data volumes

Next Steps

To continue exploring:

  • API Reference - Complete reference for all available API endpoints
  • Routes - How to create and manage API routes in Kinesis API
  • Playground - Interactive environment for testing your APIs

Conclusion

The X Engine represents a paradigm shift in API development, combining the power and flexibility of traditional programming with the accessibility and speed of visual development. By abstracting complex implementation details while maintaining full capability, it enables developers of all skill levels to create professional-grade APIs in a fraction of the time typically required.

Whether you're prototyping a simple API or building complex, interconnected systems, the X Engine provides the tools to accelerate development without sacrificing quality or control.

API Reference

The Kinesis API provides a comprehensive set of RESTful endpoints that allow you to interact with all aspects of the platform programmatically. This reference documentation will help you understand how to authenticate, make requests, and interpret responses when working with the API.

Accessing the API Documentation

Kinesis API includes interactive OpenAPI documentation that allows you to:

  • Browse all available endpoints
  • View request and response schemas
  • Test API calls directly from your browser
  • Understand authentication requirements

You can access this documentation at <your-api-url>/scalar.

Authentication

Most API endpoints require authentication using one of the following methods:

Personal Access Tokens (PAT)

The recommended method for programmatic access is using Personal Access Tokens:

  1. Generate a token in the web interface under Personal Access Tokens or PATs
  2. Include the token in your requests using the Authorization header:
Authorization: Bearer your-token-here
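
For example, a PAT-authenticated request might look like this with curl (the host and token are placeholders; the endpoint is taken from the Pagination section below):

curl -H "Authorization: Bearer your-token-here" \
  "http://your-domain-or-ip:8080/config/fetch/all?uid=0&offset=0&limit=10"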

Session-based Authentication

For web applications, you can use session-based authentication:

  1. Call the /user/login endpoint with valid credentials
  2. Store the returned token
  3. Include the token in subsequent requests

Common Request Patterns

Standard Request Format

Most endpoints follow this pattern:

  • GET endpoints accept query parameters
  • DELETE endpoints accept query parameters
  • POST endpoints accept JSON data in the request body
  • PATCH endpoints accept JSON data in the request body
  • All endpoints return JSON responses

Request Example

POST /user/login HTTP/1.1
Host: api.kinesis.world
Content-Type: application/json

{
  "auth_data": "john_doe",
  "password": "Test123*"
}
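
The same request issued with curl, using the host and credentials from the example above:

curl -X POST "https://api.kinesis.world/user/login" \
  -H "Content-Type: application/json" \
  -d '{"auth_data": "john_doe", "password": "Test123*"}'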

Response Example

{
  "status": 200,
  "message": "Login Successful!",
  "user": {
    "id": 1,
    "first_name": "John",
    "last_name": "Doe",
    "username": "john_doe",
    "email": "john_doe@example.com",
    "password": "",
    "role": "VIEWER",
    "reset_token": "",
    "bio": "",
    "profile_picture": "",
    "is_public": false,
    "links": []
  },
  "uid": 1,
  "jwt": "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiIxIiwiZXhwIjoxNjcyNTI2NDAwfQ.example_token"
}

Response Structure

All API responses follow a consistent structure:

{
  "status": 200,
  "message": "Operation successful",
  "data": {
    // Optional: Operation-specific data
  }
}

Status Codes

The API uses standard HTTP status codes:

  • 2xx: Success
    • 200: OK
    • 201: Created
    • 204: No Content
  • 4xx: Client errors
    • 400: Bad Request
    • 401: Unauthorized
    • 403: Forbidden
    • 404: Not Found
    • 422: Unprocessable Entity
  • 5xx: Server errors
    • 500: Internal Server Error

Error Handling

Error responses follow the same structure and include error details in the message field itself:

{
  "status": 400,
  "message": "Error: Invalid input data"
}

Pagination

For endpoints that return collections of items, pagination is supported:

  • Use offset and limit query parameters to control pagination
  • Responses include an amount field containing the total number of items

Example:

GET /config/fetch/all?uid=0&offset=2&limit=10
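
A paginated response therefore looks roughly like the sketch below. The amount field carries the total count; the name of the collection key (configs here) is an assumption for illustration.

{
  "status": 200,
  "message": "Configs fetched successfully!",
  "amount": 42,
  "configs": [ ... ]
}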

Using the API with the X Engine

The X Engine's visual API builder generates endpoints that follow the same patterns and conventions as the core Kinesis API. When you publish a route in the X Engine, it becomes available as a standard RESTful endpoint that can be accessed using the same authentication mechanisms.

Demo

Kinesis API offers a live demo instance where you can explore the platform's features without setting up your own installation. This allows you to get hands-on experience with the system and follow along with our tutorials using a ready-to-use environment.

Demo Instance

The demo instance is available at:

https://demo.kinesis.world/web/

Important Notice

⚠️ Data Erasure Warning: All data on the demo instance is automatically erased at regular intervals (typically every 24 hours at midnight UTC+4). Do not use the demo instance for storing any important information or for production purposes.

Default Credentials

You can access the demo instance using the following default credentials:

| Role | Username | Password |
|------|----------|----------|
| Root | root | Test123* |

What You Can Try

The demo instance is fully functional and provides access to all features of Kinesis API. Here are some activities you can explore:

  • Create and manage API routes using the X Engine
  • Define data structures and collections
  • Test API endpoints using the built-in playground
  • Upload and manage media files
  • Configure user settings and preferences

Following Tutorials

All tutorials in our documentation can be followed using the demo instance. When a tutorial refers to "your Kinesis API installation," you can use the demo instance instead.

Limitations

The demo instance has some limitations you should be aware of:

  • Some security settings (such as CORS policies) aren't properly configured, as the instance is intended for demonstration purposes only
  • Email functionality is disabled, preventing features like user registration and password reset from working completely
  • Media uploads are restricted to a maximum of 20MB per file to preserve server resources
  • The database may experience occasional performance throttling during periods of high user activity

Next Steps

Once you've explored the demo and are ready to set up your own instance:

  1. Follow the Installation Guide to install Kinesis API on your system
  2. Complete the Initialization process to set up your instance with default values
  3. Configure your new installation using the Setup Guide to customize it for your specific needs

Feedback

We welcome feedback on your experience with the demo instance. If you encounter any issues or have suggestions for improvements, please contact us through the contact page or by emailing support@kinesis.world. We also encourage you to create a new ticket in the issue tracker.

Getting Started with Kinesis API

Welcome to the Getting Started guide for Kinesis API. This section will walk you through the process of installing, initializing, and setting up your first Kinesis API instance.

Prefer video tutorials? You can follow along with our YouTube walkthrough of this same project.

What You'll Learn

In this section, you'll learn how to:

  • Install Kinesis API using Docker or Rust
  • Initialize your installation with default settings
  • Secure your instance with proper credentials
  • Understand the key components created during initialization
  • Configure your system for production use

Quick Start Overview

Getting Kinesis API up and running involves two main steps:

  1. Installation: Set up the Kinesis API software on your system
  2. Initialization: Configure the system with initial data and settings

If you're eager to start right away, follow these quick steps:

# Create necessary directories
mkdir -p data/ public/ translations/

# Create configuration file
echo "TMP_PASSWORD=yourSecurePassword" > .env
echo "API_URL=http://your-domain-or-ip:8080" >> .env

# Run the Docker container
docker run --name kinesis-api \
  -v $(pwd)/.env:/app/.env \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/public:/app/public \
  -v $(pwd)/translations:/app/translations \
  -p 8080:8080 -d \
  --restart unless-stopped \
  edgeking8100/kinesis-api:latest

# Initialize the system
curl "http://your-domain-or-ip:8080/init?code=code"

Then access the web interface at http://your-domain-or-ip:8080/web and log in with:

  • Username: root
  • Password: Test123*

Remember to change the default password immediately!

Detailed Guides

For more detailed instructions, refer to these guides:

System Requirements

Before you begin, ensure your system meets these minimum requirements:

  • Memory: 128MB RAM minimum (512MB+ recommended for production)
  • CPU: 1 core minimum (2+ cores recommended)
  • Storage: 100MB for installation + additional space for your data
  • Operating System: Any OS that can run Docker or Rust
  • Network: Outbound internet access for installation

Next Steps After Installation

Once you've completed the installation and initialization process, you'll want to:

  1. Change the default password
  2. Configure your system settings
  3. Build a Simple Counter App

Let's get started with the Installation Guide.

Installation

This guide covers how to install and set up Kinesis API on your system. We offer two installation methods: Docker (recommended for most users) and direct Rust installation (useful for developers contributing to the project).

System Requirements

Before installing Kinesis API, ensure your system meets these minimum requirements:

  • Memory: 128MB RAM minimum (512MB+ recommended for production use)
  • CPU: 1 core minimum (2+ cores recommended for production)
  • Storage: 100MB for installation + additional space for your data
  • Operating System: Any OS that can run Docker or Rust (Linux, macOS, Windows)
  • Network: Outbound internet access for installation

Configuration Options

Before installing Kinesis API, you should decide on your configuration settings. These can be set through environment variables.

Essential Environment Variables

| Variable | Description | Example |
|----------|-------------|---------|
| TMP_PASSWORD | Temporary password for setup | StrongPassword123! |
| API_URL | URL where Kinesis API will be accessed | http://localhost:8080 |

Database Configuration

You can configure the database system through additional environment variables:

| Variable | Description | Possible Values | Default |
|----------|-------------|-----------------|---------|
| DB_NAME | Database filename | Any valid filename | main_db |
| DB_STORAGE_ENGINE | Storage engine type | memory, disk, hybrid | hybrid |
| DB_ISOLATION_LEVEL | Transaction isolation level | read_uncommitted, read_committed, repeatable_read, serializable | serializable |
| DB_BUFFER_POOL_SIZE | Buffer pool size | Any positive integer | 100 |
| DB_AUTO_COMPACT | Automatic compaction | true, false | true |
| DB_RESTORE_POLICY | Recovery policy | discard, recover_pending, recover_all | recover_pending |

Important: Changing database-related environment variables after your initial setup may cause data access issues or corruption. It's best to decide on these settings before your first initialization and maintain them throughout the lifecycle of your installation.

Docker Installation (Recommended)

Using Docker is the simplest and most reliable way to deploy Kinesis API.

Prerequisites

  1. Install Docker on your system
  2. Ensure you have permissions to create and manage Docker containers

Installation Steps

  1. Create necessary directories for persistent storage:

mkdir -p data/ public/ translations/

  2. Create a .env file with your configuration:

echo "TMP_PASSWORD=yourSecurePassword" > .env
echo "API_URL=http://your-domain-or-ip:8080" >> .env

Replace yourSecurePassword with a strong password and your-domain-or-ip with your server's domain or IP address.

  3. Add any additional configuration options from the Configuration Options section above to your .env file.

  4. Run the Docker container:

docker run --name kinesis-api \
  -v $(pwd)/.env:/app/.env \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/public:/app/public \
  -v $(pwd)/translations:/app/translations \
  -p 8080:8080 -d \
  --restart unless-stopped \
  edgeking8100/kinesis-api:latest

This command:

  • Names the container kinesis-api
  • Mounts your local .env file and data/public/translations directories
  • Exposes port 8080
  • Runs in detached mode (-d)
  • Configures automatic restart
  • Uses the latest Kinesis API image
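
If you prefer Docker Compose, the same deployment can be expressed as a docker-compose.yml along these lines. This is a sketch derived from the docker run command above, not an officially provided file:

services:
  kinesis-api:
    image: edgeking8100/kinesis-api:latest
    container_name: kinesis-api
    restart: unless-stopped
    ports:
      - "8080:8080"
    volumes:
      - ./.env:/app/.env
      - ./data:/app/data
      - ./public:/app/public
      - ./translations:/app/translations

Start it with docker compose up -d.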

Using a Specific Version

If you need a specific version of Kinesis API, replace latest with a version number:

docker run --name kinesis-api \
  -v $(pwd)/.env:/app/.env \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/public:/app/public \
  -v $(pwd)/translations:/app/translations \
  -p 8080:8080 -d \
  --restart unless-stopped \
  edgeking8100/kinesis-api:0.26.0

Available Registries

Kinesis API images are available from multiple registries:

| Registry | Image |
|----------|-------|
| Docker Hub | docker.io/edgeking8100/kinesis-api:latest |
| Docker Hub | docker.io/edgeking8100/kinesis-api:<version> |
| Gitea Registry | gitea.konnect.dev/rust/kinesis-api:latest |
| Gitea Registry | gitea.konnect.dev/rust/kinesis-api:<version> |

Rust Installation (For Development)

If you're a developer who wants to build from source or contribute to Kinesis API, you can install using Rust.

Prerequisites

  1. Install Rust (version 1.86 or newer)
  2. Install development tools for your platform:
    • Linux: build-essential package or equivalent
    • macOS: Xcode Command Line Tools
    • Windows: Microsoft Visual C++ Build Tools

Installation Steps

  1. Clone the repository:

git clone https://gitea.konnect.dev/rust/kinesis-api.git
cd kinesis-api/

  2. Create and configure the environment file:

cp .env.template .env

  3. Edit the .env file to set at minimum:

    • TMP_PASSWORD with a secure value
    • API_URL with your server address (e.g., "http://localhost:8080")

  4. Build and run the application:

cargo run --bin kinesis-api

For a production build:

cargo build --release --bin kinesis-api
./target/release/kinesis-api

Post-Installation Steps

After installation, you need to initialize Kinesis API:

  1. Access the initialization endpoint:

    • Make a GET request to <your-api-url>/init?code=code, or
    • Navigate to the web interface at <your-api-url>/web
  2. This creates a root user with:

    • Username: root
    • Password: Test123*
  3. Important: Change the default password immediately after first login

For more details on initialization, see the Initialization Guide.

Verifying the Installation

To confirm Kinesis API is running correctly:

  1. Open your browser and navigate to <your-api-url>/web
  2. You should see the Kinesis API login page
  3. Try logging in with the default credentials
  4. Check that you can access the API documentation at <your-api-url>/scalar

Troubleshooting

Common Issues

  1. Container won't start:

    • Check Docker logs: docker logs kinesis-api
    • Ensure ports aren't already in use
    • Verify directory permissions
  2. Can't access the web interface:

    • Confirm the container is running: docker ps
    • Check your firewall settings
    • Verify the URL and port configuration
  3. Database connection errors:

    • Check the data directory permissions
    • Verify your DB configuration variables

Getting Help

If you encounter issues not covered here:

Next Steps

After installation, proceed to:

  1. Initialize your installation
  2. Configure your setup
  3. Build a Simple Counter App

Initialization

After installing Kinesis API, you need to initialize the system before you can start using it. This one-time process creates the necessary database structures, default user, and configuration settings.

Initialization Methods

You can initialize Kinesis API using one of two methods:

Method 1: Using the Web Interface

  1. Open your web browser and navigate to your Kinesis API installation:

    http://your-domain-or-ip:8080/web

  2. You'll be presented with an initialization screen if the system hasn't been set up yet.

  3. Click the "Initialize" button to begin the process.

Initialize Button

  4. The system will create the necessary database structures and a default root user.

  5. Once initialization is complete, you'll be redirected to the login page.

Method 2: Using a REST API Request

If you prefer to initialize the system programmatically or via command line, you can use a REST API request:

curl "http://your-domain-or-ip:8080/init?code=code"

Or using any HTTP client like wget:

wget -qO - "http://your-domain-or-ip:8080/init?code=code"

A successful initialization will return a JSON response indicating that the system has been set up.
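
Based on the standard response structure described in the API Reference, the returned JSON has roughly this shape (the exact message text may differ):

{
  "status": 200,
  "message": "Initialization successful!"
}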

Default Root User

After initialization, a default root user is created with the following credentials:

  • Username: root
  • Password: Test123*

⚠️ IMPORTANT SECURITY NOTICE: You should change the default password immediately after your first login to prevent unauthorized access to your system.

What Gets Initialized

During initialization, Kinesis API sets up:

  • Database tables and their structures
  • Encryption key for securing sensitive data
  • System constraints for data validation
  • Default root user account
  • Core configuration settings
  • Initial projects structure
  • Collection templates
  • Media storage system

These components form the foundation of your Kinesis API installation, creating the necessary structure for you to start building your APIs.

Verifying Initialization

To verify that your system has been properly initialized:

  1. Try logging in with the default credentials.
  2. Check that you can access the dashboard.
  3. Navigate to the Users section to confirm the root user exists.
  4. Ensure the API documentation is accessible at /scalar.

Reinitializing the System

In most cases, you should never need to reinitialize your system after the initial setup. Reinitializing will erase all data and reset the system to its default state.

If you absolutely must reinitialize (for example, during development or testing):

  1. Stop the Kinesis API service:

    docker stop kinesis-api
    
  2. Remove the data directory:

    rm -rf data/
    
  3. Create a fresh data directory:

    mkdir -p data/
    
  4. Restart the service:

    docker start kinesis-api
    
  5. Follow the initialization steps again.

Next Steps

After successfully initializing your Kinesis API instance:

  1. Change the default password
  2. Configure your system settings
  3. Build a Simple Counter App

Setup

After initializing your Kinesis API instance and logging in for the first time, you will be presented with a setup screen where you can configure various system settings. This page explains each configuration option to help you make informed decisions.

The Setup Process

The setup process is a one-time configuration wizard that appears after the first login with your root account. It allows you to customize essential system settings before you start using Kinesis API.

Setup Screen

Note: All settings configured during this initial setup can be modified later from the Configs page accessible at /web/configs. If you're unsure about any setting, it's generally safer to keep the default value.

Configuration Options

Environment

Default value: dev

The environment context in which this platform is being used. Typically corresponds to development stages such as:

  • dev (Development)
  • staging (Staging/Testing)
  • prod (Production)

This setting has minimal impact on system behavior but helps to identify the instance's purpose.

Project Name

Default value: Kinesis API

The name by which this platform will be identified. This appears in the user interface, email templates, and other user-facing areas. You can customize this to match your organization or project name.

API URL

Default value: Detected from your installation

The base URL where your API endpoints are accessible. This should include the protocol (http/https) and domain name or IP address.

Example: https://api.example.com

If your Kinesis API instance is running behind a reverse proxy, this should be the publicly accessible URL, not the internal address.

API Prefix (API PRE)

Default value: Empty

An optional prefix for all API routes. When set, all API endpoints will be prefixed with this value.

Example: Setting this to /api/v1 would change endpoint paths from:

  • /user/login to /api/v1/user/login

Leave this empty unless you have a specific need for URL path prefixing, such as API versioning or integration with other systems.

Front URL

Default value: [API_URL]/web

The URL where users will access the web interface. If you're using the default web interface, this should be your API URL followed by /web.

Example: https://api.example.com/web

If you're using a custom frontend or have deployed the web interface separately, specify its URL here.

Init Code

Default value: code

The security code required when calling the /init endpoint to initialize the system. Changing this provides a small layer of security against unauthorized initialization.

Recommendation: Change this from the default value to something unique, especially in production environments.

JWT Expire

Default value: 3600 (1 hour)

The lifetime of JWT authentication tokens in seconds. After this period, users will need to log in again.

Common values:

  • 3600 (1 hour)
  • 86400 (24 hours)
  • 604800 (1 week)

Shorter times enhance security but require more frequent logins. Longer times improve user convenience but may increase security risks if tokens are compromised.

Upload Size

Default value: 2048 (2 MB)

The maximum allowed size for file uploads in kilobytes. Adjust based on your expected usage and server capacity.

Examples:

  • 1024 (1 MB)
  • 5120 (5 MB)
  • 10240 (10 MB)

Setting this too high could lead to server resource issues if users upload very large files.

CORS Whitelist

Default value: Empty

A comma-separated list of domains that are allowed to make cross-origin requests to your API.

Examples:

  • example.com,api.example.com (Allow specific domains)
  • * (Allow all domains - not recommended for production)

For security reasons, only whitelist domains that legitimately need to access your API.

SMTP Settings

These settings configure the email sending capabilities of Kinesis API, which are required for features like user registration and password reset.

SMTP Username

Default value: Empty

The username for authenticating with your SMTP server. This is typically your email address.

Example: notifications@example.com

SMTP From Username

Default value: Same as SMTP Username

The email address that will appear in the "From" field of emails sent by the system. If left empty, the SMTP Username will be used.

Example: no-reply@example.com

SMTP Password

Default value: Empty

The password for authenticating with your SMTP server.

SMTP Host

Default value: Empty

The hostname or IP address of your SMTP server.

Examples:

  • smtp.gmail.com (for Gmail)
  • smtp.office365.com (for Office 365)
  • smtp.mailgun.org (for Mailgun)

SMTP Port

Default value: 587

The port number used to connect to your SMTP server.

Common values:

  • 25 (Standard SMTP - often blocked by ISPs)
  • 465 (SMTP over SSL)
  • 587 (SMTP with STARTTLS - recommended)

SMTP Login Mechanism

Default value: PLAIN

The authentication mechanism used when connecting to the SMTP server.

Options:

  • PLAIN (Standard plain text authentication)
  • LOGIN (Alternative plain text authentication)
  • XOAUTH2 (OAuth 2.0-based authentication)

Most SMTP servers use PLAIN authentication. Only change this if your email provider specifically requires a different mechanism.

SMTP StartTLS

Default value: true (Checked)

Whether to use STARTTLS when connecting to the SMTP server. This upgrades an insecure connection to a secure one.

Most modern SMTP servers require this to be enabled for security reasons.

Testing SMTP Settings

Before completing the setup, you can test your SMTP configuration by clicking the "Check SMTP Credentials" button. This will attempt to connect to your SMTP server and verify that your credentials are correct.

Completing Setup

After configuring all settings, review your choices carefully before clicking "Complete Setup". The system will save your configuration and redirect you to the login page.

Important: It's recommended to restart your Kinesis API instance after completing the setup to ensure all settings take effect properly.

Skipping Setup

While it's possible to skip the setup process, this is not recommended as it may leave your system with incomplete or incorrect configuration. Only skip setup if you're an advanced user who plans to configure the system manually.

Modifying Settings Later

All settings configured during initial setup can be modified later from the Configs page at /web/configs. This allows you to make adjustments as your needs change without having to reinstall the system.

Configs Page

Changing the Default Password

To change the default password:

  1. Log in to the web interface using the default credentials.
  2. Navigate to User Settings (click on your username in the bottom-right corner, then "Edit profile").

Go to edit profile

  3. Select "Change Password".

Change password

  4. Enter your new secure password twice.

Change Password Popup

  5. Click "Save" to confirm updating the password.

Next Steps

After completing the setup, you'll be ready to start using Kinesis API. Consider exploring these areas next:

Core Components

Kinesis API is built around a set of core components that work together to provide a complete API development and management solution. Understanding these components is essential for effectively using the platform.

Architecture Overview

At a high level, Kinesis API's architecture consists of:

  1. User Management System - Controls access and permissions
  2. Projects & Collections - Organizes your API resources
  3. Data Management - Stores and manipulates your application data
  4. API Routes - Exposes functionality through RESTful endpoints
  5. Supporting Systems - Provides additional functionality like events, media handling, and more

These components interact through the custom Kinesis DB database system and are exposed via both the web interface and API endpoints.

Key Components

Configs

The configuration system allows you to control how Kinesis API operates. This includes settings for:

  • Environment variables
  • SMTP settings for email
  • Security parameters
  • API endpoint behavior

Configs can be set during initial setup and modified later through the admin interface.

Constraints

Constraints define rules and validations that maintain data integrity across your API. They enforce:

  • Data type validations
  • Required fields
  • Value ranges and patterns
  • Relationship integrity
  • Business rules

Proper constraint management ensures your API behaves predictably and data remains valid.

Users

The user management system controls access to your Kinesis API instance. It includes:

  • User accounts with varying permission levels
  • Role-based access control (ROOT, ADMIN, AUTHOR, VIEWER)
  • Personal Access Tokens for API authentication
  • User profiles with customizable information

User management is critical for security and collaboration in multi-user environments.

Projects

Projects are the top-level organizational units in Kinesis API. They allow you to:

  • Group related APIs and resources
  • Separate concerns between different applications
  • Manage permissions at a high level
  • Create logical boundaries between different parts of your system

Each project can contain multiple collections, structures, and routes.

Collections

Collections are containers for data records of a specific type. They:

  • Organize your data into logical groups
  • Provide CRUD operations for data manipulation
  • Apply structure definitions to ensure data consistency
  • Enable efficient data retrieval and manipulation

Collections are the primary way you'll interact with data in Kinesis API.

Structures

Structures define the schema for your data. They:

  • Specify fields and their data types
  • Apply validation rules through constraints
  • Support nested and complex data models
  • Allow for custom structures with specialized behavior

Structures ensure your data follows a consistent format and meets your application's requirements.

Data

The data component provides interfaces for working with your stored information:

  • Creating, reading, updating, and deleting records
  • Querying and filtering data
  • Importing and exporting datasets
  • Managing relationships between records

Efficient data management is essential for building performant APIs.

Routes

Routes are the endpoints that expose your API functionality:

  • Created through the visual X Engine or programmatically
  • Define how requests are processed and responses are formed
  • Support various HTTP methods (GET, POST, PUT, DELETE, etc.)
  • Include the Playground for testing API behavior

Routes are the primary way external applications interact with your Kinesis API.

Events

The event system tracks important activities and changes:

  • Records system actions and user operations
  • Provides an audit trail for security and debugging

Events give you visibility into what's happening within your API.

Media

The media component handles file uploads and management:

  • Stores images, documents, and other file types
  • Offers integration with API responses

Media management allows your API to work with files and binary data.

REPL

The Read-Eval-Print Loop provides a command-line interface to interact with your API:

  • Execute commands directly against your database
  • Test operations without using the web interface
  • Perform advanced data manipulations

The REPL is a powerful tool for developers and administrators.

How Components Work Together

A typical workflow in Kinesis API might look like:

  1. Setup: Configure the system with appropriate settings
  2. Organization: Create projects to organize your work
  3. Data Modeling: Define structures and custom structures
  4. Storage: Create collections to group your data
  5. API Creation: Build routes to expose functionality
  6. Testing: Use the playground to verify behavior
  7. Deployment: Make your API available to users
  8. Monitoring: Track events and system performance

Understanding how these components interact is key to making the most of Kinesis API's capabilities.

Next Steps

To dive deeper into each component, follow the links to their dedicated documentation pages. A good starting point is understanding Users and Projects, as these form the foundation for most other operations in the system.

Configs

The configuration system in Kinesis API allows you to control and customize various aspects of the platform. These settings determine how the system behaves, what features are available, and how components interact with each other.

Accessing Configs

You can access and modify configuration settings from the Configs page in the web interface:

  1. Log in to your Kinesis API instance
  2. Navigate to /web/configs in your browser
  3. You'll see a list of all available configuration options

Configs Page

Configuration Categories

Configuration items in Kinesis API are grouped into several categories for the purpose of this documentation:

  • Environment Settings: Control the overall environment context
  • System Identifiers: Determine how the system identifies itself
  • URL Configuration: Define endpoints and access points
  • Security Settings: Control authentication and security parameters
  • SMTP Configuration: Settings for email functionality
  • Resource Limits: Define system resource boundaries
  • Feature Toggles: Enable or disable specific features

Core Configuration Options

Environment Settings

ENV

  • Default: dev
  • Description: The environment context in which the platform is being used.
  • Possible Values: dev (Development), staging (Staging/Testing), prod (Production)
  • Impact: Primarily used for identification and logging; has minimal impact on system behavior.

System Identifiers

PROJECT_NAME

  • Default: Kinesis API
  • Description: The name by which the platform is identified in the UI and emails.
  • Impact: Appears in the user interface, email templates, and other user-facing areas.

URL Configuration

API_URL

  • Default: Determined during installation
  • Description: The base URL where API endpoints are accessible.
  • Example: https://api.example.com
  • Impact: Used as the base for all API communication and for generating links.

API_PRE

  • Default: Empty
  • Description: A prefix for all API routes.
  • Example: Setting this to /api/v1 would change endpoint paths from /user/login to /api/v1/user/login
  • Impact: Affects how all API endpoints are accessed.

FRONT_URL

  • Default: [API_URL]/web
  • Description: The URL where users access the web interface.
  • Example: https://api.example.com/web
  • Impact: Used for redirects and generating links to the web interface.

Security Settings

INIT_CODE

  • Default: code
  • Description: The security code required when calling the /init endpoint.
  • Impact: Protects against unauthorized initialization of the system.

JWT_EXPIRE

  • Default: 3600 (1 hour)
  • Description: The lifetime of JWT authentication tokens in seconds.
  • Common Values:
    • 3600 (1 hour)
    • 86400 (24 hours)
    • 604800 (1 week)
  • Impact: Determines how frequently users need to reauthenticate.

CORS_WHITELIST

  • Default: Empty
  • Description: A comma-separated list of domains allowed to make cross-origin requests.
  • Example: example.com,api.example.com or * (allow all)
  • Impact: Critical for security; controls which external domains can access your API.

TOKEN_KEY

  • Default: Automatically generated during initialization
  • Description: The encryption key used for generating and validating JWT tokens.
  • Impact: Critical for security; changing this will invalidate all existing JWT tokens, forcing all users to log in again.

SMTP Configuration

These settings are required for user registration, password reset, and other email functionality.

SMTP_USERNAME

  • Default: Empty
  • Description: The username for SMTP server authentication.
  • Impact: Required for sending emails from the system.

SMTP_FROM_USERNAME

  • Default: Same as SMTP_USERNAME
  • Description: The email address that appears in the "From" field.
  • Impact: Affects how email recipients see the sender.

SMTP_PASSWORD

  • Default: Empty
  • Description: The password for SMTP server authentication.
  • Impact: Required for sending emails from the system.

SMTP_HOST

  • Default: Empty
  • Description: The hostname or IP address of the SMTP server.
  • Examples: smtp.gmail.com, smtp.office365.com
  • Impact: Determines which email server handles outgoing mail.

SMTP_PORT

  • Default: 587
  • Description: The port used to connect to the SMTP server.
  • Common Values: 25, 465, 587
  • Impact: Must match the requirements of your SMTP server.

SMTP_MECHANISM

  • Default: PLAIN
  • Description: The authentication mechanism for the SMTP server.
  • Options: PLAIN, LOGIN, XOAUTH2
  • Impact: Must match the authentication method supported by your SMTP server.

SMTP_STARTTLS

  • Default: true
  • Description: Whether to use STARTTLS when connecting to the SMTP server.
  • Impact: Security feature for encrypted email transmission.

SUPPORT_EMAIL

  • Default: support@kinesis.world
  • Description: The email address where contact form submissions are sent.
  • Impact: Determines where user inquiries are directed.

SMTP_FAKE_RECIPIENT

  • Default: hello@kinesis.world
  • Description: The email address used for testing SMTP configurations.
  • Impact: Used when testing email delivery; emails sent during testing will be addressed to this recipient.

Resource Limits

UPLOAD_SIZE

  • Default: 5120 (5 MB)
  • Description: Maximum allowed size for file uploads in kilobytes.
  • Impact: Affects media uploads and other file-related operations.

Feature Toggles

INITIAL_SETUP_DONE

  • Default: false (before setup), true (after setup)
  • Description: Indicates whether the initial setup process has been completed.
  • Impact: Controls whether the setup wizard appears on login.

Custom Configuration Items

In addition to the built-in configuration options, Kinesis API allows you to create and manage your own custom configuration settings:

Adding Custom Configs

  1. On the Configs page, click the "Add Config" button
  2. Enter a unique key name (use uppercase and underscores for consistency, e.g., MY_CUSTOM_CONFIG)
  3. Enter the value for your configuration
  4. Click "Save" to create the new configuration

Important Disclaimers

⚠️ Caution: Modifying or deleting configuration items can have serious consequences for your Kinesis API instance. Incorrect changes may lead to system instability, security vulnerabilities, or complete system failure. Be particularly careful when modifying:

  • Security-related settings like TOKEN_KEY
  • URL configurations like API_URL or API_PRE
  • SMTP settings that enable email functionality

Always test changes in a non-production environment first, and ensure you understand the purpose and impact of each configuration item before modifying it.

Managing Configuration Items

Modifying Configs

To modify a configuration:

  1. Find the config you want to change in the list
  2. Click the edit button (pencil icon) next to it
  3. Enter the new value in the input field
  4. Click "Save" to apply the change

Most configuration changes take effect immediately, but some may require a system restart.

Deleting Configs

You can delete configuration items that are no longer needed:

  1. Find the config you want to delete in the list
  2. Click the delete button (trash icon) next to it
  3. Confirm the deletion when prompted

Configuration History

Kinesis API maintains a history of configuration changes, including:

  • What was changed
  • When it was changed
  • Who made the change

This audit trail is valuable for troubleshooting and compliance purposes.

Best Practices

  1. Environment-Specific Settings: Use different configuration items for development, staging, and production environments
  2. Security Configs: Regularly rotate sensitive settings like INIT_CODE
  3. SMTP Testing: Always test email settings after changes using the "Test SMTP" function
  4. Documentation: Keep a record of non-default configuration items and why they were changed
  5. Review Regularly: Periodically review configuration items to ensure they remain appropriate

Constraints

Constraints in Kinesis API are system-level validation rules that enforce data integrity across different components of the system. They are organized into a hierarchical structure that provides comprehensive validation for various system elements.

Understanding the Constraint System

The constraint system in Kinesis API has two primary levels:

  1. Constraints: Top-level categories that apply to a group of related items (e.g., "config", "user", "project")
  2. Constraint Properties: Specific validation rules within each constraint (e.g., "name" and "value" for configs)

This hierarchical approach ensures consistent validation across all aspects of the system while maintaining flexibility for different data types.

Constraint Structure

Constraints

Constraints are the top-level categories that group related validation rules. Examples include:

  • CONFIG: Applies to configuration settings
  • USER: Applies to user account data
  • PROJECT: Applies to project details
  • COLLECTION: Applies to collection information

Each constraint contains one or more constraint properties.

Constraint Properties

Constraint properties are the specific elements within a constraint that have defined validation rules. For example, the CONFIG constraint includes properties like:

  • name: Validates the configuration key name
  • value: Validates the configuration value

Each constraint property has its own set of validation rules:

  • Character Type:

    • Alphabetical: Only letters (a-z, A-Z) are allowed
    • Numerical: Only numbers (0-9) are allowed
    • Alphanumerical: Both letters and numbers are allowed
  • Character Restrictions:

    • Allow List: Specific characters that are permitted
    • Deny List: Specific characters that are forbidden
  • Length Restrictions:

    • Min Length: Minimum number of characters required
    • Max Length: Maximum number of characters allowed
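
For example, the name property of the CONFIG constraint might combine these rules as follows (the specific values are illustrative, not the shipped defaults):

Property: name
  Character Type: Alphanumerical
  Allow List: _ (underscore)
  Deny List: (spaces and special characters)
  Min Length: 1
  Max Length: 100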

Viewing Constraints

Users with appropriate permissions can view the constraints system:

  1. Log in to your Kinesis API instance
  2. Navigate to /web/constraints in your browser
  3. You'll see a list of all system constraints

Constraints Page

Each constraint can be expanded to show its associated constraint properties.

Viewing Constraint Properties

To view the properties of a specific constraint:

  1. On the Constraints page, click on a constraint name
  2. You'll see a list of properties associated with that constraint
  3. Each property displays its:
    • Character type (alphabetical, numerical, alphanumerical)
    • Allow/deny lists
    • Current min/max length settings

Modifying Constraints

⚠️ Important: End users can only modify the minimum and maximum length settings of constraint properties. The character types, allow/deny lists, and other fundamental aspects are locked to preserve system integrity.

Modifying Min/Max Values

To modify the min/max settings of a constraint property:

  1. Navigate to the Constraints page (/web/constraints)
  2. Click on a constraint to view its properties
  3. Find the constraint property you want to modify
  4. For the minimum value:
    • Click the edit (pencil) icon next to the Min value
    • Enter the new minimum value
    • Click "Save" to apply the change
  5. For the maximum value:
    • Click the edit (pencil) icon next to the Max value
    • Enter the new maximum value
    • Click "Save" to apply the change

Best Practices

When modifying constraint property length settings:

  1. Maintain Balance: Set minimum lengths to ensure data quality while setting maximum lengths to prevent excessive data
  2. Consider Real-World Usage: Adjust length limits based on realistic use cases
  3. Test After Changes: After modifying constraints, test affected components to ensure proper functionality
  4. Document Changes: Keep a record of any constraint modifications for future reference
  5. Preserve Relationships: Ensure related constraint properties have compatible settings

Users

The Users page in Kinesis API provides comprehensive user management capabilities for administrators. This interface allows root users to view, add, modify, and delete user accounts across the platform.

Access Control

Important: The Users management page is only accessible to users with the ROOT role. Other users attempting to access this page will be redirected to the dashboard.

Accessing the Users Page

To access the Users management page:

  1. Log in with a ROOT user account
  2. Navigate to /web/users in your browser or use the navigation menu

User Interface Overview

Users Management Page

The Users management interface includes:

  • A searchable list of all users in the system
  • Pagination controls for navigating through large user lists
  • Actions for adding new users, changing roles, and deleting accounts
  • User details including ID, username, name, email, and role

Viewing and Filtering Users

User List

The main section of the page displays a table of users with the following information:

  • ID: The unique identifier for each user
  • Username: The login name (links to user profile)
  • Name: The user's full name (first and last name)
  • Email: The user's email address
  • Role: The user's permission level (ROOT, ADMIN, AUTHOR, or VIEWER)
  • Actions: Buttons for available actions on each user

Filtering Users

To find specific users:

  1. Use the search box at the top of the user list
  2. Type any part of the username, name, email, or role
  3. The list will automatically filter to show matching users

Pagination

For systems with many users:

  1. Navigate between pages using the pagination controls
  2. The page displays up to 15 users at a time

User Roles

Kinesis API implements a role-based access control system with four permission levels:

Role   | Description
-------|----------------------------------------------------------------------------
ROOT   | Full system access, including user management and critical system settings
ADMIN  | Administrative access to most features, but cannot manage users and configs
AUTHOR | Can create and modify content but has limited administrative access
VIEWER | Read-only access to most parts of the system

Adding New Users

Prerequisite: SMTP settings must be properly configured for the user registration process to work. See Configs for details on setting up email.

To add a new user:

  1. Click the "Add a New User" button at the top of the page
  2. Fill in the required information:
    • First Name
    • Last Name
    • Username
    • Email Address
  3. Select the appropriate role for the user
  4. Click "Create"

Behind the Scenes

When you create a new user:

  1. The system generates a secure random password
  2. An email is sent to the new user with their:
    • Username
    • Generated password
    • Login instructions
  3. The password is hashed before storage and cannot be retrieved later

Add User Modal

Changing User Roles

To change a user's role:

  1. Find the user in the list
  2. Click the role change button (star icon)
  3. Select the new role from the available options
  4. Confirm the change

Note that:

  • You cannot change the role of ROOT users
  • You cannot downgrade your own ROOT account

Change Role Modal

Deleting Users

To delete a user account:

  1. Find the user in the list
  2. Click the delete button (trash icon)
  3. Confirm the deletion in the modal that appears

Important considerations:

  • User deletion is permanent and cannot be undone
  • All user data and associated content will be removed
  • ROOT users cannot be deleted through this interface
  • You cannot delete your own account

Delete User Modal

Password Management

The Kinesis API user management system handles passwords securely:

  • Passwords for new users are automatically generated with strong entropy
  • Passwords must contain lowercase letters, uppercase letters, numbers, and special characters
  • Passwords are never stored in plain text—only secure hashes are saved
  • Users can reset their passwords via the "Forgot Password" functionality
  • Admin users cannot see or reset passwords directly, only trigger the password reset process

Personal Access Tokens

Personal Access Tokens (PATs) provide a secure way to authenticate with the Kinesis API programmatically. They allow you to interact with the API without using your username and password, making them ideal for automated scripts, external applications, and CI/CD pipelines.

Understanding Personal Access Tokens

PATs function similarly to passwords but have several advantages:

  • Fine-grained Permissions: Limit tokens to specific actions and resources
  • Limited Lifespan: Set expiration dates to reduce security risks
  • Independent Revocation: Revoke individual tokens without affecting other access methods
  • Traceability: Track which token is used for which operations

Accessing the PAT Management Page

To manage your Personal Access Tokens:

  1. Log in to your Kinesis API account
  2. Navigate to /web/pats in your browser or use the sidebar navigation

PAT Management Page

Creating a New Token

To create a new Personal Access Token:

  1. Click the "Create a New Personal Access Token" button

  2. Fill in the required information:

    • Name: A descriptive name to identify the token's purpose
    • Valid From: The date and time when the token becomes active
    • Valid To: The expiration date and time for the token
    • Permissions: Select the specific actions this token can perform
  3. Click "Create" to generate the token

Create PAT Modal

Token Display - Important Notice

⚠️ Critical Security Information: When a token is first created, the actual token value is displayed only once. Copy this token immediately and store it securely. For security reasons, Kinesis API only stores a hashed version of the token and cannot display it again after you leave the page.

Managing Existing Tokens

The PAT management page displays all your existing tokens with their details:

  • Name: The descriptive name you assigned
  • ID: The unique identifier for the token
  • Valid From: The start date of the token's validity period
  • Valid Until: The expiration date of the token
  • Rights: The permissions assigned to the token

Updating Token Details

You can modify several aspects of an existing token:

  1. Name: Click the pencil icon next to the token name
  2. Valid From: Click the pencil icon next to the start date
  3. Valid Until: Click the pencil icon next to the expiration date
  4. Rights: Click the star icon to modify permissions

Note: For security reasons, you cannot view or modify the actual token value after creation. If you need a new token value, you must create a new token and delete the old one.

Deleting Tokens

To revoke access for a token:

  1. Click the trash icon next to the token you want to delete
  2. Confirm the deletion in the modal that appears

Once deleted, a token cannot be recovered, and any applications using it will lose access immediately.

Token Permissions

Kinesis API uses a granular permission system for PATs. When creating or editing a token, you can select specific permissions that control what actions the token can perform:

Permission Category | Examples
--------------------|---------------------------------------------------
User Management     | USER_FETCH, USER_CREATE, USER_UPDATE, USER_DELETE
Config Management   | CONFIG_FETCH, CONFIG_CREATE, CONFIG_UPDATE
Project Management  | PROJECT_FETCH, PROJECT_CREATE, PROJECT_UPDATE
Data Operations     | DATA_FETCH, DATA_CREATE, DATA_UPDATE, DATA_DELETE
Media Management    | MEDIA_FETCH, MEDIA_CREATE, MEDIA_UPDATE
Route Management    | ROUTING_FETCH, ROUTING_CREATE_UPDATE

The available permissions depend on your user role. For example, ROOT users have access to all permissions, while other roles have a more limited set.

Using Personal Access Tokens

To use a PAT in API requests:

GET /user/fetch HTTP/1.1
Host: api.example.com
Authorization: Bearer your-token-here

Include the token in the Authorization header with the Bearer prefix for all authenticated requests.
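
The same request can be issued from a script. The sketch below uses Python's requests library; the host and token value are placeholders, as in the example above.

import requests

# The token value is shown only once at creation time; in practice, load it
# from an environment variable or a secret manager rather than hardcoding it.
TOKEN = "your-token-here"

response = requests.get(
    "https://api.example.com/user/fetch",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
response.raise_for_status()
print(response.json())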

Best Practices

  1. Use Descriptive Names: Give each token a name that identifies its purpose or the application using it
  2. Set Appropriate Expirations: Use shorter lifespans for tokens with broader permissions
  3. Limit Permissions: Grant only the specific permissions needed for each use case
  4. Rotate Regularly: Create new tokens and delete old ones periodically
  5. Secure Storage: Store tokens securely, treating them with the same care as passwords
  6. Monitor Usage: Regularly review your active tokens and delete any that are no longer needed

Token Security

Personal Access Tokens are equivalent to your password for the granted permissions. To keep your account and data secure:

  • Never share tokens in public repositories, client-side code, or insecure communications
  • Use environment variables or secure secret management systems to store tokens
  • Set expiration dates appropriate to the use case (shorter is better)
  • Delete tokens immediately if they might be compromised

Related Documentation

  • API Reference - Learn how to use tokens with API endpoints
  • Users - Overview of user management
  • Security - Additional security considerations

Profile

User profiles in Kinesis API provide a way for users to personalize their identity within the platform and share information with others. This page explains how to view, edit, and manage user profiles.

Understanding User Profiles

Each user in Kinesis API has a profile that includes:

  • Basic Information: Name, username, and email address
  • Profile Picture: A visual representation of the user
  • Bio: A free-form text area for users to describe themselves
  • Links: Customizable links to external sites (social media, portfolio, etc.)
  • Visibility Settings: Controls who can view the profile

Profiles help users identify each other within the platform and provide context about their roles and expertise.

Viewing Profiles

Accessing Your Own Profile

To view your own profile:

  1. Click your username in the navigation bar
  2. Select "View Profile" from the dropdown menu

Alternatively, navigate directly to /web/user?id=your_username.

Viewing Other Users' Profiles

To view another user's profile:

  1. Click on their username anywhere it appears in the interface
  2. Navigate directly to /web/user?id=their_username

Note that you can only view profiles of other users if:

  • Their profile is set to public, or
  • You are logged in and have appropriate permissions

Profile Content

A typical user profile displays:

User Profile

  1. Profile Picture: Either an uploaded image or initials if no image is provided
  2. Full Name: The user's first and last name
  3. Username: Prefixed with @ (e.g., @john_doe)
  4. Email Address: Clickable to send an email
  5. Bio: The user's self-description
  6. Links: Icons linking to external sites

Profile Visibility

Profiles can be set to either:

  • Public: Visible to anyone, even unauthenticated visitors
  • Private: Only visible to authenticated users of the platform

This setting can be changed in the user settings page.

Editing Your Profile

To edit your profile:

  1. Click your username in the navigation bar
  2. Select "Edit Profile" from the dropdown menu
  3. The settings page opens with the Profile tab active

Profile Settings

The profile settings page allows you to edit:

Profile Settings

Profile Picture

  • Upload: Click "Upload" to select an image from your device
  • Remove: Click "Remove" to delete your current profile picture and revert to initials

Profile pictures are automatically resized and optimized for display.

Basic Information

You can edit the following fields:

  • First Name: Your given name
  • Last Name: Your family name
  • Username: Your login name on the platform (must be unique)
  • Email Address: Your contact email (must be unique)

Bio

The bio field supports plain text where you can describe yourself, your role, or any information you wish to share. Line breaks are preserved in the display.

Account Settings

The Account tab provides additional options:

  • ID: Your unique numeric identifier (non-editable)
  • Role: Your assigned role in the system (non-editable)
  • Account Type: Toggle between public and private profile visibility
  • Change Password: Update your login password
  • Log Out: End your current session

Two-Factor Authentication (2FA)

The 2FA tab allows you to add an additional security layer to your account. Two-factor authentication requires both your password and a time-based verification code when logging in.

2FA Settings

Setting Up 2FA

  1. Navigate to the 2FA tab in Settings
  2. Click the "Setup 2FA" button
  3. A QR code and secret key will appear
  4. Scan the QR code with an authenticator app (such as Google Authenticator, Authy, or Microsoft Authenticator)
  5. Click "Verify 2FA" and enter the 6-digit code displayed in your authenticator app
  6. After successful verification, your 2FA is active

Recovery Codes

When setting up 2FA, you'll receive a set of recovery codes. These one-time use codes allow you to access your account if you lose your authenticator device.

  • Click "Download Recovery Codes" to save these codes
  • Store them securely in a password manager or other safe location
  • Each code can be used only once

Disabling 2FA

If you need to disable 2FA:

  1. Navigate to the 2FA tab in Settings
  2. Click the "Disable 2FA" button
  3. Confirm your decision when prompted

Security Best Practices

  • Never share your 2FA secret key or QR code with anyone
  • Store recovery codes securely and separately from your password
  • If you get a new device, set up 2FA again before disposing of your old device
  • Consider using a password manager that supports TOTP (Time-based One-Time Password) as a backup

Links

The Links tab allows you to manage external links displayed on your profile:

  1. Click "New" to add a link
  2. For each link, you can specify:
    • URL: The full web address (must include https://)
    • Icon: A Remix Icon class name (e.g., ri-github-fill)
    • Name: A label for the link
  3. Click "Save" to update all links at once

You can add multiple links and delete existing ones as needed.

Appearance

The Appearance tab lets you customize the interface:

  • Theme: Select from various color themes
  • Sidebar: Choose which items appear in your navigation sidebar

Profile Usage Best Practices

  1. Complete Your Profile: A complete profile helps others identify and contact you
  2. Appropriate Content: Keep profile information professional and relevant
  3. Regular Updates: Keep your information current, especially contact details
  4. Consider Visibility: Set appropriate visibility based on your role and preferences

Projects

Projects in Kinesis API serve as the primary organizational units that group related collections, structures, and routes. They provide logical separation between different API initiatives and help maintain clear boundaries for access control and resource management.

Accessing Projects

The Projects page can be accessed by navigating to /web/projects in your browser after logging in.

Projects Page

Projects Visibility

The visibility of projects depends on your user role:

  • ROOT Users: Can see all projects created within the Kinesis API instance
  • All Other Users: Can only see projects they are members of

This role-based visibility ensures that users only have access to projects relevant to their work.

Project List Interface

The Projects interface includes:

  • A filterable, paginated list of projects
  • Project cards showing key information
  • Action buttons for various operations
  • A creation button for ROOT and ADMIN users

Each project card displays:

  • Project name
  • Project ID
  • Description
  • API path
  • Number of members

Creating a New Project

ROOT and ADMIN users can create new projects:

  1. Click the "Create a new project" button
  2. Fill in the required information:
    • Name: A human-readable name for the project
    • ID: A unique identifier (used in URLs and API paths)
    • Description: A brief explanation of the project's purpose
    • API Path: The base path for all API routes in this project

Create Project Modal

Project ID Requirements

Project IDs must:

  • Be unique across the Kinesis API instance
  • Contain only lowercase letters, numbers, and underscores
  • Start with a letter
  • Be between 3 and 50 characters long
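
As an illustration, these rules correspond to a pattern like the following Python sketch (not necessarily the exact expression Kinesis API uses internally):

import re

# One pattern consistent with the rules above: a lowercase letter first, then
# lowercase letters, digits, or underscores, for a total of 3 to 50 characters.
PROJECT_ID = re.compile(r"^[a-z][a-z0-9_]{2,49}$")

print(bool(PROJECT_ID.match("inventory_v1")))  # True
print(bool(PROJECT_ID.match("1inventory")))    # False: must start with a letter
print(bool(PROJECT_ID.match("ab")))            # False: shorter than 3 characters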

API Path Conventions

API paths should:

  • Start with a forward slash (/)
  • Follow RESTful conventions
  • Be unique across all projects
  • Reflect the project's purpose

Example: /api/v1/inventory for an inventory management project

Project Member Management

Projects use member-based access control. Only users who are members of a project can access its resources.

Viewing Project Members

To view the members of a project:

  1. Click the "View Members" button (user icon) on the project card
  2. A modal will display all users who have access to the project

Adding Project Members

ROOT and ADMIN users can add members to a project:

  1. Click the "Add Member" button (plus user icon)
  2. Select users from the list to add them to the project
  3. Click the add icon next to each user you want to add

Removing Project Members

ROOT and ADMIN users can remove members from a project:

  1. Click the "Remove Member" button (user with x icon)
  2. Select users from the list to remove them from the project
  3. Click the remove icon next to each user you want to remove

Accessing a Project

To view or manage a specific project:

  1. Click on the project name or use the "View Project" button
  2. You'll be taken to the project detail page (/web/project?id=[project_id])

Project Page

The project detail page provides access to all collections within that project. Most operations described above (such as managing members and deleting the project) can also be performed directly from this page.

Project Page Capabilities

From the project detail page, you can:

  • View and edit the project's information (name, description, API path)
  • Manage project members (add or remove users)
  • Delete the project entirely
  • View and interact with all collections belonging to the project
  • Create new collections directly from this interface
  • Access collections to manage their data and structures

The centralized project interface makes it convenient to perform all project-related operations without returning to the main projects list, streamlining your workflow when working within a specific project context.

Deleting Projects

ROOT and ADMIN users can delete projects:

  1. Click the "Delete Project" button (trash icon) on the project card
  2. Confirm the deletion in the modal that appears

⚠️ Warning: Deleting a project permanently removes all collections, structures, data, and routes associated with it. This action cannot be undone.

Filtering Projects

To find specific projects:

  1. Use the filter input at the top of the project list
  2. Type any part of the project name, ID, description, or API path
  3. The list will automatically filter to show matching projects

Pagination

For systems with many projects:

  1. Navigate between pages using the pagination controls
  2. The page displays up to 8 projects at a time

Project Lifecycle

Projects typically follow this lifecycle:

  1. Creation: A ROOT or ADMIN user creates the project
  2. Configuration: Collections and structures are defined
  3. Development: API routes are created using the X Engine
  4. Maintenance: The project evolves with new features and updates
  5. Retirement: When no longer needed, the project is deleted

Related Documentation

  • Collections - Manage data collections within projects
  • Structures - Define data structures for collections
  • Data - Work with data stored in collections
  • Routes - Create API endpoints for project resources

Collections

Collections in Kinesis API are containers for related data within a project. They serve as the primary way to organize, structure, and access your API's data resources.

Understanding Collections

Collections in Kinesis API work similarly to tables in a traditional database or collections in document databases. Each collection:

  • Belongs to a specific project
  • Contains data records that conform to defined structures
  • Can be accessed via API endpoints

Collections are the building blocks for storing and organizing your application data in a logical, accessible manner.

Accessing Collections

Collections can be accessed in two ways:

  1. Via the Web Interface: Navigate to /web/project?id=[project_id] and view the collections listed for that project
  2. Via the API: Use the collections endpoints with appropriate authentication

Collection Management Interface

Project Page

The collection management interface within a project includes:

  • A list of all collections in the project
  • Tools for creating new collections
  • Access to individual collection settings and structures

Creating a Collection

To create a new collection:

  1. Navigate to a project page
  2. Click the "Create a New Collection" button
  3. Fill in the required information:
    • Name: A human-readable name for the collection
    • ID: A unique identifier (used in API paths and queries)
    • Description: An optional description of the collection's purpose
  4. Click "Create" to save the collection

Create Collection Modal

Collection ID Requirements

Collection IDs must:

  • Be unique within a project
  • Contain only lowercase letters, numbers, and underscores
  • Start with a letter
  • Be between 3 and 50 characters

Collection Details Page

Clicking on a collection name takes you to the collection details page, where you can:

  1. View and edit collection information
  2. Manage structures within the collection
  3. Manage custom structures
  4. Delete the collection if needed

Collection Page

Updating Collection Information

You can update a collection's information by:

  1. Clicking the appropriate edit button next to the collection detail
  2. Modifying the information in the modal that appears
  3. Saving your changes

Available updates include:

  • Changing the collection name
  • Modifying the collection description

Note that the collection ID cannot be changed after creation as it would break existing data references.

Managing Structures

Each collection contains structures that define the fields of data it can store. From the collection details page, you can:

  • Create new structures using the "Create New" button in the Structures section
  • Edit existing structures by clicking the edit icon
  • Delete structures when they're no longer needed

See the Structures documentation for more details on creating and managing structures.

Managing Custom Structures

Custom structures allow you to create complex, reusable data templates. From the collection details page, you can:

  • Create new custom structures
  • Navigate to existing custom structures to edit their components
  • Delete custom structures when they're no longer needed

See the Custom Structures documentation for more information.

Deleting a Collection

To delete a collection:

  1. Navigate to the collection details page
  2. Click the "Delete Collection" button
  3. Confirm the deletion in the modal that appears

⚠️ Warning: Deleting a collection permanently removes all its structures and associated data. This action cannot be undone.

Data Operations

Once you've set up a collection with appropriate structures, you can perform various operations on its data:

  • Create new records
  • Retrieve records through queries
  • Update existing records
  • Delete records

See the Data documentation for details on working with collection data.

Best Practices

For optimal collection management:

  1. Logical Organization: Group related data into collections
  2. Clear Naming: Use descriptive names and IDs that reflect the collection's purpose
  3. Documentation: Add thorough descriptions to help team members understand the collection's use
  4. Structure Planning: Design your structures carefully before adding significant amounts of data
  5. Regular Maintenance: Periodically review collections to ensure they remain relevant and well-organized

Related Documentation

  • Projects - Information about project management
  • Structures - Defining data structures for collections
  • Custom Structures - Creating complex, reusable structures
  • Data - Working with collection data

Structures

Structures in Kinesis API define individual fields within collections. Unlike traditional database systems where you might define an entire schema at once, Kinesis API uses structures to represent each individual field in your data model. This granular approach offers greater flexibility and reusability.

Understanding Structures

Each structure represents a single field that can be used within collections. Key characteristics of structures include:

  • Each structure defines exactly one field with its data type and validation rules
  • Structures can be reused across multiple collections
  • They establish specific validation rules for individual data elements
  • They support both simple and complex data types

Database Analogy: If a collection is like a database table, a structure is like a single column in that table.

Managing Structures from Collections

Structures are created, edited, and deleted from within the collection interface. To access and manage structures:

  1. Navigate to a project page
  2. Click on a collection to view its details
  3. Locate the "Structures" section on the collection details page

Collection Page

Creating a Structure

To create a new structure:

  1. From the collection details page, click the "Create New" button in the Structures section
  2. Fill in the required information:
    • Name: A unique identifier for the structure (field name)
    • Description: Optional explanation of the structure's purpose
  3. In the same modal, define the properties for this field:
    • Select the appropriate data type
    • Configure validation rules
    • Set default values if needed
  4. Click "Create" to save the structure

Create Structure Modal

Structure Properties

When creating or editing a structure, you configure properties for that specific field:

  • Type: The type of data to be represented (Text, Email, Password, Markdown, Integer, etc.)
  • Required: Whether this field must be present in all records
  • Unique: Whether values must be unique across all records
  • Default: An optional default value
  • Min/Max: Constraints for strings (length) or numbers (value)
  • Pattern: A regular expression pattern for validation (strings only)

Field Types

Structures support various field types to model different kinds of data:

Type     | Description                                    | Example
---------|------------------------------------------------|------------------------------------------------------
TEXT     | Basic text data for names, descriptions, etc.  | "Hello World"
EMAIL    | Email addresses with validation                | "user@example.com"
PASSWORD | Securely stored password strings               | "********"
MARKDOWN | Rich text with markdown formatting             | "# Heading\n\nParagraph with **bold** text"
INTEGER  | Whole number values                            | 42, -7
FLOAT    | Decimal number values                          | 3.14159, -2.5
ENUM     | Value from a predefined list of options        | "pending" (from ["pending", "approved", "rejected"])
DATE     | Calendar date values                           | "2023-04-15"
DATETIME | Date and time values with timezone             | "2023-04-15T14:30:00Z"
MEDIA    | References to uploaded media files             | "uploads/image-123.jpg"
BOOLEAN  | True/false values                              | true, false
UID      | System-generated unique identifier             | "5f8d43e1b4ff..."
JSON     | Arbitrary JSON data structures                 | {"name": "John", "tags": ["important", "new"]}

Note on List/Array Types: Kinesis API supports array/list structures through the "array" flag. When enabled for a structure, it allows storing multiple values of the same type. Each element in the array must conform to the structure's validation rules (e.g., min/max values). In the interface, array elements are split by commas. This approach maintains type validation while providing flexibility for storing multiple related values within a single field. Arrays are useful for simple collections of values; for more complex relationships, consider using separate collections with UID references (similar to foreign keys in traditional databases).

Editing Structures

To modify an existing structure:

  1. From the collection details page, find the structure in the list
  2. Click the edit icon (pencil) next to the structure
  3. Make your changes in the structure editor
  4. Save your changes

Note: Modifying structures may affect existing data. Be cautious when changing field types or removing fields that contain data.

Deleting Structures

To remove a structure:

  1. From the collection details page, find the structure in the list
  2. Click the delete icon (trash) next to the structure
  3. Confirm the deletion when prompted

⚠️ Warning: Deleting a structure will affect any data that uses it. Ensure that no critical data depends on the structure before deleting.

Best Practices

When designing structures:

  1. Use Clear Naming: Choose descriptive, consistent names for structures and fields
  2. Start Simple: Begin with minimal structures and evolve them as needed
  3. Consider Validation: Use constraints to ensure data quality
  4. Think About Relationships: Plan how structures will relate to each other
  5. Document Your Design: Add clear descriptions to structures and fields
  6. Versioning Strategy: Consider how to handle structure changes over time

Related Documentation

  • Collections - Managing collections that contain structures
  • Custom Structures - Creating reusable structure templates
  • Data - Working with data based on structures

Custom Structures

Custom structures in Kinesis API provide a way to create reusable, complex data templates that can be referenced across multiple collections. They function as user-defined object types that encapsulate related fields, enabling more sophisticated data modeling than is possible with basic structures alone.

Understanding Custom Structures

While regular structures define individual fields within a collection, custom structures allow you to define composite data types with multiple fields grouped together. Think of them as objects or complex data types that:

  • Act as templates for complex data
  • Can be reused across multiple collections
  • Encapsulate related fields
  • Support nested data models
  • Enable modular data design

Accessing Custom Structures

Custom structures are managed through the collection interface but have their own dedicated pages:

  1. Navigate to a project page
  2. Select a collection
  3. Find the "Custom Structures" section
  4. Click on a custom structure name to access its dedicated page

Collection Page

Creating a Custom Structure

To create a new custom structure:

  1. From a collection page, find the "Custom Structures" section
  2. Click the "Create New" button
  3. Fill in the required information:
    • Name: A descriptive name for the custom structure
    • ID: A unique identifier used in API references
    • Description: An explanation of the custom structure's purpose
  4. Click "Create" to save the custom structure

Create Custom Structure Modal

Custom Structure Detail Page

After creating a custom structure, you can access its detail page by clicking on its name. The custom structure page allows you to:

  • View and edit the custom structure's information
  • Add structures (fields) to the custom structure
  • Manage existing structures
  • Delete the custom structure

Custom Structure Page

Managing Custom Structure Information

From the custom structure page, you can modify various aspects:

Updating Custom Structure ID

  1. Click the "Update Custom Structure ID" button
  2. Enter the new ID in the modal
  3. Click "Submit" to save the changes

Updating Custom Structure Name

  1. Click the "Update Custom Structure Name" button
  2. Enter the new name in the modal
  3. Click "Submit" to save the changes

Updating Custom Structure Description

  1. Click the "Update Custom Structure Description" button
  2. Enter the new description in the modal
  3. Click "Submit" to save the changes

Adding Structures to a Custom Structure

To add a field to a custom structure:

  1. From the custom structure page, click the "Create New" button in the Structures section
  2. Fill in the field details:
    • Name: Field name
    • ID: Field identifier
    • Description: Field description
    • Type: Data type selection
    • Additional properties like min/max values, default value, etc.
  3. Click "Create" to add the field

The process for adding fields to a custom structure is identical to adding structures to a collection. See the Structures documentation for more details on field types and properties.

Example Use Cases

Custom structures are particularly useful for:

Address Information

Create an "Address" custom structure with fields like:

  • Street
  • City
  • State/Province
  • Postal Code
  • Country

This can then be used in "Customer", "Shipping", "Billing" and other collections.

Contact Details

Build a "Contact Info" custom structure containing:

  • Email
  • Phone
  • Website
  • Social Media Profiles

Product Specifications

Define "Product Specs" with varying attributes based on product type:

  • Dimensions
  • Weight
  • Material
  • Technical Specifications

Modifying Custom Structures

When you modify a custom structure by adding, changing, or removing fields, these changes affect all places where the custom structure is used. This provides a powerful way to update data models across your entire API without making changes in multiple locations.

However, be aware that:

  • Removing fields from a custom structure could impact existing data
  • Changing field types might require data migration
  • Adding required fields to an existing custom structure could cause validation errors

Deleting Custom Structures

To delete a custom structure:

  1. From the custom structure page, click the "Delete Custom Structure" button
  2. Confirm the deletion in the modal that appears

⚠️ Warning: Deleting a custom structure will affect all places where it's used. Ensure it's not referenced by any other structures before deletion.

Best Practices

When working with custom structures:

  1. Descriptive Naming: Use clear, descriptive names that indicate the custom structure's purpose
  2. Logical Grouping: Group related fields that naturally belong together
  3. Appropriate Granularity: Create custom structures at the right level of detail
  4. Reusability: Design custom structures to be reusable across multiple collections
  5. Documentation: Add thorough descriptions to help team members understand the purpose and usage of each custom structure
  6. Versioning Strategy: Consider how to handle changes to custom structures over time

Related Documentation

  • Collections - Managing collections that contain custom structures
  • Structures - Understanding basic structures and field types
  • Data - Working with data based on complex structures

Data

Data objects in Kinesis API represent the actual content stored within your collections. Each data object is associated with a specific collection and contains values for the structures (fields) defined within that collection. This page explains how to create, view, edit, and manage your data.

Understanding Data in Kinesis API

In Kinesis API, data is organized as follows:

  • Projects contain Collections
  • Collections define Structures (fields)
  • Data Objects store values for these structures
  • Each data object contains Data Pairs (structure-value associations)

A data pair links a specific structure (field) with its corresponding value. For example, if you have a "title" structure, a data pair might associate it with the value "My First Article".
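
Conceptually, a data object can be pictured as a set of structure-to-value associations, as in this purely illustrative Python sketch (it does not reflect the actual storage format):

# Illustrative only: a data object with a nickname and two data pairs.
article = {
    "nickname": "first_article",
    "pairs": {
        "title": "My First Article",  # structure "title" paired with its value
        "published": True,            # structure "published" paired with its value
    },
}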

🔒 Security Note: All data pairs in Kinesis API are encrypted by default. This ensures your data remains secure both at rest and in transit, providing built-in protection for sensitive information without requiring additional configuration.

Accessing Data Management

To access data management in Kinesis API using the web UI:

  1. Navigate to /web/data in your browser or click "Data" in the main navigation menu
  2. You'll see a list of all projects you have access to
  3. Click on a project to go to /web/data/project?id=project_id and view its collections
  4. Click on a collection to go to /web/data/collection?project_id=project_id&id=collection_id and view its data objects

This hierarchical navigation allows you to drill down from projects to collections to individual data objects.

Data Page

Browsing Projects and Collections

The data management interface follows a logical structure that mirrors your data organization:

Projects Level

At /web/data, you'll see all projects you have access to:

  • Each project is displayed as a card with its name and description
  • You can filter projects using the search box at the top of the page
  • Click on any project to navigate to its collections

Collections Level

At /web/data/project?id=project_id, you'll see all collections within the selected project:

  • Each collection is displayed with its name and description
  • Click on any collection to view its data objects

Viewing Data Objects

From a collection page, you can see all data objects within that collection:

Data Page

Each data object card displays:

  • The object's nickname (if set) or ID
  • Number of structures and custom structures
  • Action buttons for various operations

Data Object Details

To view the details of a data object:

  1. Click the view button (open link icon) or its title on a data object card
  2. You'll be taken to a page displaying all structure values
  3. Regular structures are displayed at the top
  4. Custom structures are displayed below, grouped by their custom structure type

View Data

Creating Data Objects

Users with ROOT, ADMIN, or AUTHOR roles can create new data objects:

  1. From a collection page, click the "Create New" button
  2. Enter an optional nickname for the data object
  3. Fill in values for each structure (field)
  4. For custom structures, fill in values for their component fields
  5. Click "Create" to save the data object

Create Data

Structure Value Types

When creating or editing data objects, different structure types accept different kinds of input:

Structure Type | Input Method     | Notes
---------------|------------------|------------------------------------
TEXT           | Text field       | Regular text input
EMAIL          | Email field      | Validates email format
PASSWORD       | Password field   | Masked input with show/hide option
MARKDOWN       | Markdown editor  | With formatting toolbar
INTEGER        | Number input     | Whole numbers only
FLOAT          | Number input     | Decimal values allowed
ENUM           | Dropdown         | Select from predefined options
DATE           | Date picker      | Calendar interface
DATETIME       | Date-time picker | Date and time selection
MEDIA          | File upload      | Upload images
BOOLEAN        | Checkbox         | True/false toggle
UID            | Text field       | Must contain a valid ID
JSON           | Text area        | Raw JSON input

Editing Data Objects

Users with ROOT, ADMIN, or AUTHOR roles can edit existing data objects:

  1. From the data object view page, click "Edit Data"
  2. Modify the values for any structures
  3. Click "Update" to save your changes

Edit Data

Deleting Data Objects

Users with ROOT, ADMIN, or AUTHOR roles can delete data objects:

  1. From the data object view or edit page, click "Delete Data"
  2. Confirm the deletion in the modal that appears

⚠️ Warning: Deleting a data object permanently removes it from the system. This action cannot be undone.

User Permissions

Access to data objects is controlled by user roles:

Role   | View | Create | Edit | Delete
-------|------|--------|------|-------
ROOT   | ✓    | ✓      | ✓    | ✓
ADMIN  | ✓    | ✓      | ✓    | ✓
AUTHOR | ✓    | ✓      | ✓    | ✓
VIEWER | ✓    | ✗      | ✗    | ✗

Additionally, users can only access data within projects they are members of.

Working with Custom Structures

When a collection includes custom structures, data objects for that collection will include sections for each custom structure type:

  1. Each custom structure is displayed in its own card
  2. The card contains fields for all structures within the custom structure
  3. Values are entered and displayed just like regular structures

Custom structures allow for more complex, nested data models within your collections.

Filtering and Pagination

When viewing data objects in a collection:

  1. Use the filter box to search for objects by nickname or ID
  2. Use pagination controls to navigate through large collections
  3. Adjust the page size if needed

Best Practices

For effective data management:

  1. Use Descriptive Nicknames: Give data objects clear, meaningful nicknames to make them easier to identify
  2. Regular Backups: Back up your data regularly, especially before making major changes
  3. Consider Relationships: Design your data structure to reflect relationships between different objects

Routes

Routes in Kinesis API are the endpoints that expose your API functionality to the outside world. They define how your API responds to HTTP requests, what logic executes when an endpoint is called, and what data is returned to clients. Kinesis API provides a powerful visual route builder called the Flow Editor to create complex API routes without having to write traditional code.

Understanding Routes

Each route in Kinesis API has:

  • Route ID: A unique identifier for the route within the project
  • Path: The URL path through which the route is accessed
  • HTTP Method: The HTTP method the route responds to (GET, POST, PUT, PATCH, DELETE, etc.)
  • Authentication Settings: Optional JWT authentication requirements
  • Parameters: URL parameters the route can accept
  • Body: JSON body structure the route expects (for POST, PUT, PATCH)
  • Flow: The visual logic flow that defines the route's behavior

Routes belong to projects and are managed at the project level, allowing you to organize related functionality in a logical way.

Accessing Routes Management

To access the routes management interface:

  1. Log in to your Kinesis API account
  2. Navigate to /web/routes in your browser or click "Routes" in the main navigation menu
  3. You'll see a list of all projects you have access to
  4. Click on a project to view and manage its routes at /web/routes/project?id=project_id

Routes Page

Browsing Projects and Routes

Projects Level

At /web/routes, you'll see all projects you have access to:

  • Each project card displays the project's name, ID, description, and API path
  • You can filter projects using the search box
  • View the number of members in each project
  • Click the eye icon to view project members
  • Click on any project to navigate to its routes

Routes Level

At /web/routes/project?id=project_id, you'll see all routes within the selected project:

  • Each route card shows the route's ID and path
  • You can filter routes using the search box
  • View, edit, or delete routes using the action buttons
  • Create new routes using the "Create New" button (ADMIN and ROOT users only)

Routes Page

Route API Path Construction

The complete API path that clients will use to access your route is constructed from multiple components:

  • Base URL: The root URL of your Kinesis API instance (e.g., https://api.kinesis.world)
  • API Prefix: The global prefix for all routes in your instance (e.g., /api/v1)
  • Project Path: The API path segment for the project the route belongs to (e.g., /my_project)
  • Route Path: The path defined for the route (e.g., /users/fetch/all)

The final API path is a concatenation of these components, with all routes served under an additional /x/ segment, forming a URL like https://api.kinesis.world/api/v1/x/my_project/users/fetch/all.
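
As a sketch, the components combine as follows (using the example values above):

base_url   = "https://api.kinesis.world"   # Base URL of your instance
api_prefix = "/api/v1"                     # Global API prefix
project    = "/my_project"                 # Project API path
route      = "/users/fetch/all"            # Route path

# All routes are served under the /x/ segment:
full_url = f"{base_url}{api_prefix}/x{project}{route}"
# -> https://api.kinesis.world/api/v1/x/my_project/users/fetch/all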

HTTP Methods

When creating a route, you must select which HTTP method it will respond to. Each method has a specific purpose:

Method  | Description                                      | Body Support | Typical Use
--------|--------------------------------------------------|--------------|----------------------------------------
GET     | Retrieve data without modifying resources        | No           | Fetching data, searching, filtering
POST    | Create new resources or submit data              | Yes          | Creating new records, submitting forms
PUT     | Replace an entire resource with new data         | Yes          | Complete resource updates
PATCH   | Apply partial updates to a resource              | Yes          | Partial resource updates
DELETE  | Remove a resource                                | No*          | Deleting records
OPTIONS | Describe communication options for the resource  | No           | CORS preflight requests, API discovery
HEAD    | Like GET but returns only headers, no body       | No           | Checking resource existence/metadata

*Note: While DELETE technically can have a body according to HTTP specifications, it's uncommon and not supported in all clients.

JWT Authentication

Routes can be configured to require JSON Web Token (JWT) authentication. To implement JWT authentication in your API:

  1. Create a collection to store user data with at least:

    • A unique identifier field (e.g., uid or username)
    • Password field (should be stored securely)
    • Any additional user information
  2. Create two essential routes:

    • Login/Token Creation: Validates credentials and issues a JWT
    • Token Validation: Verifies the JWT and extracts user information

JWT Authentication Flow

  1. User sends credentials to the login route
  2. Route validates credentials against the user collection
  3. If valid, a JWT containing the user's identifier is created and returned
  4. For subsequent requests to protected routes, the client includes this token
  5. Protected routes verify the token before processing the request
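
The following Python sketch illustrates this flow from the client's perspective. The /auth/login and /users/me routes and the token response field are hypothetical; the actual paths and field names depend on the routes you build.

import requests

BASE = "https://api.kinesis.world/api/v1/x/my_project"  # illustrative instance and project

# Steps 1-3: send credentials to the login route, which returns a JWT if valid.
login = requests.post(f"{BASE}/auth/login",
                      json={"username": "jdoe", "password": "secret"})
token = login.json()["token"]  # field name depends on your login route's RETURN block

# Steps 4-5: include the token on subsequent requests to protected routes.
profile = requests.get(f"{BASE}/users/me",
                       headers={"Authorization": f"Bearer {token}"})
print(profile.json())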

Note: For a detailed implementation guide, see the JWT Authentication Tutorial.

URL Parameters

URL parameters allow clients to pass data to your API through the URL. When configuring a route, you can define:

  1. Parameter Delimiter: The character that separates parameters in the URL

    • Typically ? for the first parameter and & for subsequent ones
    • Example: /users/search?name=John&age=30
  2. Parameter Definitions: For each parameter, you define:

    • Name: The parameter identifier
    • Type: Data type (String, Number, Boolean, etc.)

Kinesis API automatically validates incoming parameter values against these definitions, rejecting requests with invalid parameters.
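
For example, a client might call a hypothetical search route as follows; a value that fails type validation (such as a non-numeric age) is rejected before the route logic runs.

import requests

response = requests.get(
    "https://api.kinesis.world/api/v1/x/my_project/users/search",
    params={"name": "John", "age": 30},  # encoded as ?name=John&age=30
)
print(response.status_code, response.json())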

Request Body

For routes using HTTP methods that support a request body (POST, PUT, PATCH), you can define the expected structure of that body. Define each field in the body with:

  • Name: The field identifier
  • Type: Data type expected (Integer, Float, String, Boolean, Array, Other)

Kinesis API validates incoming request bodies against this defined structure, ensuring data integrity before your route logic executes.
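
As a sketch, a POST to a hypothetical route whose body declares a String field and an Integer field might look like this:

import requests

response = requests.post(
    "https://api.kinesis.world/api/v1/x/my_project/users/create",
    json={"username": "jdoe", "age": 30},  # must match the declared field types
)
print(response.status_code, response.json())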

⚠️ Important Note: For comprehensive understanding of routes, authentication flows, and advanced API features, please refer to the Tutorials Section. These tutorials provide step-by-step guides and real-world examples that demonstrate how to effectively use Kinesis API's routing capabilities.

Flow Editor: Visual Route Builder

The Flow Editor is a powerful visual programming tool that allows you to create complex API logic without writing traditional code. When creating or editing a route, you'll use the Flow Editor to define the route's behavior:

  1. The left panel contains draggable blocks representing different operations
  2. Drag blocks onto the canvas to build your route's logic flow
  3. Connect blocks to define the execution path
  4. Configure each block's properties by clicking on it

Available blocks include:

Block Type | Purpose
-----------|-------------------------------------
FETCH      | Retrieve data from collections
ASSIGNMENT | Assign values to variables
TEMPLATE   | Create response templates
CONDITION  | Add conditional logic (if/else)
LOOP       | Iterate over data sets
END_LOOP   | Mark the end of a loop
FILTER     | Filter data based on conditions
PROPERTY   | Access object properties
FUNCTION   | Execute predefined functions
OBJECT     | Create object literals
UPDATE     | Update existing data
CREATE     | Create new data
RETURN     | Return a response and end execution

Creating Routes

Users with ROOT or ADMIN roles can create new routes:

  1. From a project's routes page, click the "Create New" button
  2. Fill in the route configuration details:
    • Route ID: Unique identifier for the route (e.g., fetch_users)
    • Route Path: The URL path (e.g., /users/fetch/all)
    • HTTP Method: Select GET, POST, PUT, PATCH, DELETE, etc.
  3. Configure optional JWT authentication requirements
  4. Define URL parameters and their delimiter (typically & or ;)
  5. Define the expected request body structure (for POST, PUT, PATCH)
  6. Build your route logic using the visual X Engine flow editor
  7. Click "Create" to save the route

Create Route

Viewing Routes

To view a route's details:

  1. From a project's routes page, click the view button (open link icon) on a route card
  2. You'll see the route's configuration details:
    • Basic information (ID, path, method)
    • Authentication settings
    • Parameter definitions
    • Body structure
    • Visual representation of the route's logic flow

From the route view page, you can:

  • Test the route using the Playground
  • Edit the route (if you have appropriate permissions)
  • Delete the route (if you have appropriate permissions)

View Route

Editing Routes

Users with ROOT or ADMIN roles can edit existing routes:

  1. From a project's routes page, click the edit button (pencil icon) on a route card
  2. Modify any aspect of the route:
    • Update the route path
    • Change the HTTP method
    • Modify authentication settings
    • Add or remove parameters
    • Change body structure
    • Redesign the logic flow using the X Engine
  3. Click "Update" to save your changes

Edit Route

Deleting Routes

Users with ROOT or ADMIN roles can delete routes:

  1. From a project's routes page, click the delete button (trash icon) on a route card
  2. Alternatively, click "Delete Route" on the route view or edit page
  3. Confirm the deletion in the modal that appears

⚠️ Warning: Deleting a route permanently removes it from the system. Any applications or services relying on this route will no longer be able to access it.

User Permissions

Access to routes management is controlled by user roles:

Role   | View Routes | Create Routes | Edit Routes | Delete Routes
-------|-------------|---------------|-------------|--------------
ROOT   | ✓           | ✓             | ✓           | ✓
ADMIN  | ✓           | ✓             | ✓           | ✓
AUTHOR | ✓           | ✗             | ✗           | ✗
VIEWER | ✓           | ✗             | ✗           | ✗

Additionally, users can only access routes within projects they are members of.

Testing Routes

After creating or modifying a route, you should test it to ensure it behaves as expected:

  1. Use the Playground to send test requests to your route
  2. Verify that the response matches your expectations
  3. Test different parameter values and edge cases
  4. Check error handling by sending invalid requests

Route Security Best Practices

  1. Authentication: Use JWT authentication for routes that access sensitive data or perform modifications
  2. Input Validation: Validate all incoming data using appropriate blocks in your flow
  3. Error Handling: Add proper error handling to provide meaningful feedback on failures
  4. Rate Limiting: Consider implementing rate limiting for public-facing routes
  5. Minimal Exposure: Only expose the minimum data necessary in responses
  6. Testing: Thoroughly test routes before making them available to clients

Blocks

Blocks are the building components of Kinesis API's X Routing system. Each block represents a specific operation that can be connected together to form a complete route logic. This guide explains how to use each block type and its configuration options.

Overview

The X Routing system uses a visual flow-based approach where you connect different blocks to process requests, manipulate data, and return responses. Each block has specific inputs, outputs, and configuration options.

Common Block Properties

All blocks share some common properties:

  • Connections: Each block (except START) has an input connector and an output connector
  • Expansion: Click the header to expand/collapse a block's configuration panel
  • Deletion: Click the X button to delete a block (not available for START block)

Block Types

START Block

The START block is the entry point of every route flow. It is automatically created and cannot be deleted.

  • Purpose: Indicates where the request processing begins
  • Configuration: None required
  • Connections: Can only connect to one block via its output

FETCH Block

Fetch Block

The FETCH block retrieves data from a collection and stores it in a local variable.

  • Purpose: Retrieve data from collections
  • Configuration:
    • Local Name: Variable name to store the fetched data
    • Reference Collection: Name of the collection to fetch data from
  • Usage: Use FETCH to retrieve data that you'll need later in your route processing
  • Example: Fetching user data to validate permissions
{
  "local_name": "users",
  "ref_col": "users_collection"
}

ASSIGNMENT Block

Assignment Block

The ASSIGNMENT block assigns a value to a variable based on conditions and operations.

  • Purpose: Create or modify variables
  • Configuration:
    • Local Name: Variable name to store the result
    • Conditions: Optional conditions to evaluate
    • Operations: Operations to perform if conditions are met
  • Usage: Use for variable creation, data transformation, or conditional assignments
  • Example: Creating a status variable based on user role
{
  "local_name": "isAdmin",
  "conditions": [
    {
      "operands": [
        {
          "ref_var": true,
          "rtype": "STRING",
          "data": "role"
        },
        {
          "ref_var": false,
          "rtype": "STRING",
          "data": "admin"
        }
      ],
      "condition_type": "EQUAL_TO",
      "not": false,
      "next": "NONE"
    }
  ],
  "operations": [
    {
      "operands": [
        {
          "ref_var": false,
          "rtype": "BOOLEAN",
          "data": "true"
        }
      ],
      "operation_type": "NONE",
      "not": false,
      "next": "NONE"
    }
  ]
}

TEMPLATE Block

Template Block

The TEMPLATE block creates a string using a template with variable substitution.

  • Purpose: Create dynamic strings by combining static text and variable values
  • Configuration:
    • Local Name: Variable name to store the templated string
    • Template: The template string with placeholders
    • Data: Variables to use in the template
    • Conditions: Optional conditions for template processing
  • Usage: Use for creating dynamic messages, formatted content, or SQL queries
  • Example: Creating a personalized greeting
{
  "local_name": "greeting",
  "template": "Hello, {}! Welcome to {}.",
  "data": [
    {
      "ref_var": true,
      "rtype": "STRING",
      "data": "user.name"
    },
    {
      "ref_var": false,
      "rtype": "STRING",
      "data": "Kinesis API"
    }
  ],
  "conditions": []
}
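
Each {} placeholder is filled, in order, by the corresponding entry in data. With user.name resolving to "Alice", this template would produce the string "Hello, Alice! Welcome to Kinesis API.".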

CONDITION Block

Condition Block

The CONDITION block evaluates logical conditions and controls flow based on the result.

  • Purpose: Implement conditional logic and flow control
  • Configuration:
    • Conditions: Set of conditions to evaluate
    • Action: What to do if conditions are met (CONTINUE, BREAK, FAIL)
    • Fail Object: Status and message to return if action is FAIL
  • Usage: Use for validation, permission checks, or branching logic
  • Example: Validating user permissions
{
  "conditions": [
    {
      "operands": [
        {
          "ref_var": true,
          "rtype": "STRING",
          "data": "user.role"
        },
        {
          "ref_var": false,
          "rtype": "STRING",
          "data": "admin"
        }
      ],
      "condition_type": "NOT_EQUAL_TO",
      "not": false,
      "next": "NONE"
    }
  ],
  "action": "FAIL",
  "fail": {
    "status": 403,
    "message": "Unauthorized: Insufficient permissions"
  }
}

LOOP Block

Loop Block

The LOOP block iterates over a range of values or an array.

  • Purpose: Process multiple items or iterate a specific number of times
  • Configuration:
    • Local Name: Variable name for the current loop value
    • Start: Starting value for the loop
    • End: Ending value for the loop
    • Step: Optional increment value (defaults to 1)
    • Include Last: Whether to include the end value in the iteration
  • Usage: Use for batch processing, pagination, or iterative operations
  • Example: Processing a list of items
{
  "local_name": "index",
  "start": {
    "ref_var": false,
    "rtype": "INTEGER",
    "data": "0"
  },
  "end": {
    "ref_var": true,
    "rtype": "INTEGER",
    "data": "items.length"
  },
  "step": null,
  "include_last": false
}
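
Because include_last is false and step defaults to 1 when null, this loop runs index from 0 through items.length - 1, matching zero-based array indexing.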

END_LOOP Block

The END_LOOP block marks the end of a loop section.

  • Purpose: Indicates where a loop section ends
  • Configuration:
    • Local Name: Must match the corresponding LOOP block's local name
  • Usage: Must be paired with a LOOP block
  • Example: Closing a loop that processes items
{
  "local_name": "index"
}

FILTER Block

The FILTER block filters items in an array based on specified criteria.

  • Purpose: Reduce an array to only items that match criteria
  • Configuration:
    • Local Name: Variable name to store the filtered results
    • Reference Variable: Variable containing the array to filter
    • Reference Property: Optional property to filter by
    • Filters: Filtering criteria
  • Usage: Use to find matching items or remove unwanted elements
  • Example: Filtering active users
{
  "local_name": "activeUsers",
  "ref_var": "users",
  "ref_property": "status",
  "filters": [
    {
      "operand": {
        "ref_var": false,
        "rtype": "STRING",
        "data": "active"
      },
      "operation_type": "EQUAL_TO",
      "not": false,
      "next": "NONE"
    }
  ]
}

PROPERTY Block

The PROPERTY block accesses or manipulates properties of objects or arrays.

  • Purpose: Extract values from objects/arrays or perform operations on them
  • Configuration:
    • Local Name: Variable name to store the result
    • Data: The object or array to operate on
    • Apply: Operation to apply (GET_PROPERTY, LENGTH, GET_FIRST, GET_LAST, GET_INDEX)
    • Additional: Additional information for the operation (e.g., property name, index)
  • Usage: Use to extract specific data or compute properties of collections
  • Example: Getting an object's property
{
  "local_name": "userName",
  "property": {
    "data": {
      "ref_var": true,
      "rtype": "OTHER",
      "data": "user"
    },
    "apply": "GET_PROPERTY",
    "additional": "name"
  }
}
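
The other apply operations use the same shape. As a minimal sketch (assuming additional is simply ignored by operations that don't need it), counting the items in a users array might look like:

{
  "local_name": "userCount",
  "property": {
    "data": {
      "ref_var": true,
      "rtype": "OTHER",
      "data": "users"
    },
    "apply": "LENGTH",
    "additional": ""
  }
}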

FUNCTION Block

The FUNCTION block executes predefined functions.

  • Purpose: Perform common operations using built-in functions
  • Configuration:
    • Local Name: Variable name to store the function result
    • Function ID: The function to execute (e.g., V4, GENERATE_TIMESTAMP, GENERATE_JWT_TOKEN, PAGINATE)
    • Parameters: Function parameters
  • Usage: Use for utility operations like generating IDs or formatting dates
  • Example: Generating a UUID
{
  "local_name": "newId",
  "func": {
    "id": "V4",
    "params": []
  }
}

OBJECT Block

The OBJECT block creates a new object with specified key-value pairs.

  • Purpose: Construct custom objects
  • Configuration:
    • Local Name: Variable name to store the created object
    • Pairs: Key-value pairs for the object
  • Usage: Use to create response objects or structured data
  • Example: Creating a user profile object
{
  "local_name": "profile",
  "pairs": [
    {
      "id": "name",
      "data": {
        "ref_var": true,
        "rtype": "STRING",
        "data": "user.name"
      }
    },
    {
      "id": "email",
      "data": {
        "ref_var": true,
        "rtype": "STRING",
        "data": "user.email"
      }
    }
  ]
}

UPDATE Block

The UPDATE block modifies existing data in a collection based on specified criteria and rules.

  • Purpose: Perform targeted updates to collection data
  • Configuration:
    • Reference Collection: Collection containing the data to update
    • Reference Property: Property path used to identify or navigate the data structure (can be empty or in the form 'field' or 'custom_structure.field')
    • Add: Optional value to add (used for arrays or numeric additions)
    • Set: Optional value to set (used for direct replacement of values)
    • Filter: Used when the property is an array to filter which elements of the array should be updated
    • Targets: Specifies which data objects to update by evaluating conditions against their fields
    • Save: Boolean indicating whether to persist changes immediately
    • Conditions: Optional conditions that must be met for the update to proceed
  • Usage: Use for complex data updates, especially when you need to update specific fields conditionally or work with arrays
  • Example: Updating specific fields of user records that match certain criteria
{
  "ref_col": "users",
  "ref_property": "profile.settings",
  "add": null,
  "set": {
    "ref_var": true,
    "rtype": "OTHER",
    "data": "newSettings"
  },
  "filter": {
    "operand": {
      "ref_var": false,
      "rtype": "STRING",
      "data": "darkMode"
    },
    "operation_type": "EQUAL_TO",
    "not": false,
    "next": "NONE"
  },
  "targets": [
    {
      "field": "userId",
      "conditions": [
        {
          "operands": [
            {
              "ref_var": false,
              "rtype": "STRING",
              "data": "123"
            }
          ],
          "condition_type": "EQUAL_TO",
          "not": false,
          "next": "NONE"
        }
      ]
    }
  ],
  "save": true,
  "conditions": []
}

How the UPDATE Block Works:

  1. Initial Conditions Check: The block first evaluates any conditions to determine if the update should proceed.

  2. Target Selection:

    • The block evaluates the targets to determine which data objects should be updated
    • Each target specifies a field and conditions for selecting data objects
    • For example, in the above JSON, it will select data objects where userId equals "123"
  3. Property Navigation:

    • The ref_property specifies which property path to update within the selected objects
    • If empty, the entire object is considered for update
  4. Filter Application:

    • If the property is an array, filter determines which array elements to update
    • For example, it might update only array elements where a certain property equals a certain value
  5. Value Modification:

    • set: Directly replaces the value of the specified property
    • add: For arrays, adds elements; for numbers, performs addition
  6. Persistence:

    • If save is true, changes are immediately persisted to the database
    • If false, changes remain in memory for further processing

SIMPLE UPDATE Block

The SIMPLE UPDATE block modifies existing data in a collection with a more streamlined configuration than the standard UPDATE block.

  • Purpose: Perform straightforward updates to collection data
  • Configuration:
    • Reference Collection: Collection containing the data to update
    • Reference Property: Property path used to identify or navigate the data structure (can be empty or in the form 'field' or 'custom_structure.field')
    • Set: Value to set (used for direct replacement of values)
    • Targets: Specifies which data objects to update by evaluating conditions against their fields
    • Save: Boolean indicating whether to persist changes immediately
  • Usage: Use for simple updates where you don't need to add values, filter arrays, or use conditions
  • Example: Updating a user's email address
{
  "ref_col": "users",
  "ref_property": "email",
  "set": {
    "ref_var": false,
    "rtype": "STRING",
    "data": "new_email@example.com"
  },
  "targets": [
    {
      "field": "userId",
      "conditions": [
        {
          "operands": [
            {
              "ref_var": false,
              "rtype": "STRING",
              "data": "123"
            }
          ],
          "condition_type": "EQUAL_TO",
          "not": false,
          "next": "NONE"
        }
      ]
    }
  ],
  "save": true
}

How the SIMPLE UPDATE Block Works:

  1. Target Selection:

    • The block evaluates the targets to determine which data objects should be updated
    • Each target specifies a field and conditions for selecting data objects
  2. Property Navigation:

    • The ref_property specifies which property path to update within the selected objects
    • If empty, the entire object is considered for update
  3. Value Modification:

    • set: Directly replaces the value of the specified property
  4. Persistence:

    • If save is true, changes are immediately persisted to the database
    • If false, changes remain in memory for further processing
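
To make this concrete, here is a hypothetical users record before and after running the example above (the old email value is illustrative):

Before:

{
  "userId": "123",
  "email": "old_email@example.com"
}

After:

{
  "userId": "123",
  "email": "new_email@example.com"
}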

CREATE Block

The CREATE block creates new data in a collection.

  • Purpose: Insert new records into collections
  • Configuration:
    • Reference Collection: Collection where data will be created
    • Reference Object: Object containing the data to create
    • Save: Whether to save immediately
    • Conditions: Optional conditions for creation
  • Usage: Use to insert new records based on request data
  • Example: Creating a new user
{
  "ref_col": "users",
  "ref_object": "newUser",
  "save": true,
  "conditions": []
}
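
Here, newUser is expected to be a variable that already holds the record's data, typically constructed earlier in the flow with an OBJECT block.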

RETURN Block

The RETURN block ends processing and returns a response to the client.

  • Purpose: Generate and send the API response
  • Configuration:
    • Pairs: Key-value pairs for the response object
    • Conditions: Optional conditions for the response
  • Usage: Use to complete the request and send data back to the client
  • Example: Returning a success response with data
{
  "pairs": [
    {
      "id": "status",
      "data": {
        "ref_var": false,
        "rtype": "INTEGER",
        "data": "200"
      }
    },
    {
      "id": "data",
      "data": {
        "ref_var": true,
        "rtype": "OTHER",
        "data": "result"
      }
    }
  ],
  "conditions": []
}

Best Practices

  1. Start Simple: Begin with basic flows and add complexity as needed
  2. Use Descriptive Names: Choose clear variable names for readability
  3. Handle Errors: Include appropriate condition blocks to validate inputs and handle errors
  4. Test Thoroughly: Test your routes with various inputs to ensure they work as expected
  5. Document Your Logic: Add comments to your flows to explain complex logic

  • Routes - Managing routes in Kinesis API
  • Collections - Working with data collections
  • Data - Working with data in Kinesis API

Playground

The Playground is an interactive testing environment built into Kinesis API that allows you to explore, test, and debug your API endpoints. It provides a convenient way to experiment with your API routes without needing external tools.

Overview

The Playground consists of three main components:

  1. Main Playground Page: Lists projects and previously saved requests
  2. Routes Page: Shows all routes within a selected project
  3. Request Page: Allows you to test individual routes with custom parameters and bodies

All requests and responses are saved locally in your browser, allowing you to replay them later.

Note: The Playground is designed for testing and development purposes. While powerful for quick testing, it isn't intended to replace specialized API testing tools for comprehensive testing scenarios.

Main Playground Page

To access the Playground:

  1. Log in to your Kinesis API account
  2. Navigate to /web/playground in your browser or select "Playground" from the navigation menu

Playground Page

The main page displays:

Projects Section

This section shows all projects you are a member of. For each project, you'll see:

  • Project name
  • API path
  • Member count
  • View buttons for routes and members

Use the filter input to quickly find specific projects.

Replay Previous Requests Section

This section displays requests you've previously made using the Playground:

  • Request date/time
  • API endpoint URL
  • Options to view or delete the saved request

These requests are stored locally in your browser and are not visible to other users.

Project Routes Page

When you click on a project in the main Playground, you'll be taken to the routes page for that project:

/web/playground/routes?id=[project_id]

Playground Project Routes Page

This page displays:

  • Project information (name, ID, description, API path)
  • A list of all routes in the project
  • Each route shows its ID and API path
  • A button to test each route

Use the filter input to quickly find specific routes.

Request Testing Page

This is where you actually test API routes. Access this page by:

  • Clicking on a route in the Project Routes page
  • Clicking on a saved request in the Replay section of the main Playground
  • Creating a new request via the "New Request" button on the main Playground

URL pattern: /web/playground/request?project_id=[project_id]&id=[route_id]

Request Testing Page

The request testing interface includes:

HTTP Method and URL

The top section shows:

  • HTTP method selector (GET, POST, PUT, etc.)
  • URL input field showing the full API path
  • Send button to execute the request

Authorization

Add JWT tokens or Personal Access Tokens (PATs) for authenticated requests:

  • Input field for token value
  • Show/hide toggle for security
  • Automatically populated for routes that require authentication

URL Parameters

For routes with URL parameters:

  • Add parameter button to include additional parameters
  • Key-value pairs for each parameter
  • Parameters are automatically added to the URL
  • Parameter changes update the URL field in real-time

Request Body

For POST, PUT, and PATCH requests:

  • JSON editor for the request body
  • Syntax highlighting and validation
  • Auto-populated with the expected structure based on route configuration

Response

After sending a request:

  • Response is displayed in a JSON editor
  • Syntax highlighting for easy reading
  • Status code and timing information

Saving and Replaying Requests

The Playground automatically saves your requests locally in your browser. When you send a request:

  1. The request method, URL, authentication token, parameters, and body are saved
  2. The response is also saved
  3. A unique identifier is generated for the request

To replay a saved request:

  1. Go to the main Playground page
  2. Find your request in the Replay section
  3. Click the view button to load the request
  4. Optionally modify any parameters
  5. Click "Send" to execute the request again

Security Considerations

  • Authentication tokens are stored locally in your browser
  • Tokens are never sent to other users or systems beyond the API itself
  • Consider using test accounts or temporary tokens when testing in shared environments
  • Clear your browser data regularly to remove sensitive information

Using Playground in Development Workflow

The Playground can be particularly helpful in these scenarios:

  1. Initial API Testing: Quickly verify a new route works as expected
  2. Debugging Issues: Isolate and test problematic API calls
  3. Exploring APIs: Learn how existing endpoints work and what they return
  4. Sharing Examples: Create sample requests that can be recreated by team members
  5. Iterative Development: Test changes to routes as you develop them

For more comprehensive examples of using the Playground in real-world scenarios, refer to our tutorials.

Events

The Events page in Kinesis API provides a comprehensive audit trail of all significant actions that occur within your system. This powerful monitoring tool helps administrators track changes, troubleshoot issues, and maintain accountability across the platform.

Access Control

Important: The Events page is only accessible to users with ROOT or ADMIN roles. Other users attempting to access this page will be redirected to the dashboard.

Accessing the Events Page

To access the Events page:

  1. Log in with a ROOT or ADMIN account
  2. Navigate to /web/events in your browser or use the navigation menu

Understanding Events

Events in Kinesis API represent significant actions that have occurred within the system. Each event captures:

  • Event Type: The category and specific action (e.g., user_create, data_update)
  • Timestamp: When the action occurred
  • Description: Details about what happened, often including references to specific users, projects, or collections
  • Redirect Link: A direct link to the relevant page in the system

Events serve as an audit trail, allowing administrators to track who made changes, what was changed, and when those changes occurred.

Events Interface

Events Page

The Events interface includes:

  • A filterable, paginated list of events
  • Events displayed in reverse chronological order (newest first)
  • Visual icons representing different event types
  • Links to related pages for further investigation

Types of Events Tracked

Kinesis API tracks events across all major system components:

User Events

  • Account creation and registration
  • Role changes
  • Account deletion
  • Password reset requests

Configuration Events

  • Configuration creation
  • Configuration updates
  • Configuration deletion

Project Events

  • Project creation
  • Project updates (name, description, API path)
  • Project deletion
  • Member addition/removal

Collection & Structure Events

  • Collection creation/deletion
  • Structure creation/deletion
  • Custom structure operations

Data Events

  • Data creation
  • Data updates
  • Data deletion
  • Publishing status changes

API Routes Events

  • Route creation
  • Route modifications
  • Route deletion

System Events

  • Constraint modifications
  • Personal Access Token management
  • Redirects management
  • Code snippet management

Filtering and Navigating Events

Event Filtering

To find specific events:

  1. Use the search box at the top of the events list
  2. Type any part of the event description, event type, or redirect path
  3. The list will automatically filter to show matching events

Pagination

For systems with many events:

  1. Navigate between pages using the pagination controls
  2. The page displays up to 42 events at a time

Understanding Event Information

Each event entry contains several key pieces of information:

  • Event Type Icon: Visual representation of the event category
  • Event Type: The specific action that occurred
  • Timestamp: When the action took place
  • Description: Details about the event, including:
    • User references (highlighted with username)
    • Project references (highlighted with project ID)
    • Collection references (highlighted with collection ID)
  • Navigation Link: Button to go directly to the relevant page

Event Retention

Events are stored permanently in the system database to maintain a complete audit history. The events page implements pagination to handle large numbers of events efficiently.

Common Use Cases

The Events page is particularly useful for:

  1. Security Monitoring: Track user creation, role changes, and password resets
  2. Troubleshooting: Identify when and how changes were made that might have caused issues
  3. User Activity Tracking: Monitor which users are making changes to the system
  4. Audit Compliance: Maintain records of all system changes for compliance requirements
  5. Change Management: Verify that planned changes were implemented correctly

Media

The Media management system in Kinesis API provides a centralized location to upload, manage, and utilize media files across your API and web interface. This page explains how to use the Media functionality to handle images.

Accessing Media Management

To access the Media management interface:

  1. Log in to your Kinesis API account
  2. Navigate to /web/media in your browser or click "Media" in the navigation menu

Media Page

Media Interface Overview

The Media management interface includes:

  • A searchable list of all media files in the system
  • Pagination controls for navigating through large media collections
  • Tools for uploading new media
  • Preview functionality for existing media
  • Copy link buttons for easy sharing
  • Delete options for administrators

Uploading Media

All authenticated users can upload media files:

  1. Click the "Upload Media" button at the top of the page
  2. A modal will appear prompting you to select a file
  3. Choose a file from your device
  4. The file will be uploaded and added to your media library

Supported File Types

Kinesis API supports various image file types including:

  • JPG
  • PNG
  • GIF
  • WebP
  • SVG

The maximum file size is determined by your system configuration (default: 2MB).

Viewing Media

The main page displays a list of all media files with:

  • A thumbnail preview
  • The media ID
  • The filename
  • Action buttons

Previewing Media

To preview a media file:

  1. Click on the file thumbnail or name
  2. A modal will open showing a larger preview of the image
  3. Click outside the modal or the X button to close it

Managing Media Files

Filtering Media

To find specific media files:

  1. Use the filter box at the top of the media list
  2. Type any part of the filename or ID
  3. The list will automatically filter to show matching files

Pagination

For systems with many media files:

  1. Navigate between pages using the pagination controls
  2. The page displays up to 10 media files at a time

Deleting Media

Users with ROOT or ADMIN roles can delete media files:

  1. Click the delete button (trash icon) next to the media file
  2. A confirmation modal will appear showing a preview of the file
  3. Confirm the deletion

⚠️ Warning: Deleting a media file is permanent and will remove it from all places where it's being used. Ensure the file is no longer needed before deletion.

Access Control

Media management follows these permission rules:

  • ROOT: view, upload, and delete media
  • ADMIN: view, upload, and delete media
  • AUTHOR: view and upload media
  • VIEWER: view and upload media

Public Access to Media

Media files uploaded to Kinesis API are publicly accessible via their direct URLs. This allows you to:

  • Use media in public-facing API responses
  • Embed images in web pages
  • Link to downloadable files

Keep this in mind when uploading sensitive content—if a file shouldn't be publicly accessible, consider encrypting it or storing it elsewhere.

Best Practices

  1. Use Descriptive Filenames: Clear filenames make media easier to find and manage
  2. Optimize Before Upload: Compress images and optimize files before uploading to save space
  3. Regular Cleanup: Periodically remove unused media to keep your library organized
  4. Secure Sensitive Content: Remember that uploaded media is publicly accessible
  5. Backup Important Files: Keep backups of critical media files outside the system

REPL

The REPL (Read-Eval-Print Loop) is a powerful interface that allows ROOT users to interact directly with the underlying database system in Kinesis API. This advanced feature provides a command-line style interface for executing database operations, testing queries, and managing data structures without leaving the web interface.

Access Control

⚠️ Important: The REPL interface is only accessible to users with the ROOT role. This restriction is in place because the REPL provides direct access to the database, bypassing the standard API permissions and validations.

Accessing the REPL

To access the REPL interface:

  1. Log in with a ROOT user account
  2. Navigate to /web/repl in your browser or select "REPL" from the navigation menu

REPL Interface

The REPL interface consists of:

  1. Command Input Area: A textarea where you can enter database commands
  2. Output Format Selector: Choose between Table, JSON, and Standard output formats
  3. Execute Button: Run the entered command
  4. Output Display: Shows the results of executed commands

Available Commands

The REPL supports a comprehensive set of commands for interacting with the database:

Table Management

  • CREATE_TABLE: Create a new table with optional schema
    Example: CREATE_TABLE users name STRING --required age INTEGER
  • DROP_TABLE: Delete a table
    Example: DROP_TABLE users
  • GET_TABLE: Show table schema
    Example: GET_TABLE users
  • GET_TABLES: List all tables
    Example: GET_TABLES
  • UPDATE_SCHEMA: Update table schema
    Example: UPDATE_SCHEMA users --version=2 active BOOLEAN

Record Management

  • INSERT: Insert a new record
    Example: INSERT INTO users ID 1 SET name = "John" age = 30
  • UPDATE: Update an existing record
    Example: UPDATE users ID 1 SET age = 31
  • DELETE: Delete a record
    Example: DELETE FROM users 1
  • GET_RECORD: Retrieve a single record
    Example: GET_RECORD FROM users 1
  • GET_RECORDS: Retrieve all records from a table
    Example: GET_RECORDS FROM users
  • SEARCH_RECORDS: Search for records
    Example: SEARCH_RECORDS FROM users MATCH "John"

Help

  • HELP: Show general help
    Example: HELP
  • HELP [command]: Show help for a specific command
    Example: HELP CREATE_TABLE

Output Formats

The REPL supports three output formats:

  1. Table: Formats results as ASCII tables for easy reading
  2. JSON: Returns results in JSON format for programmatic analysis
  3. Standard: Simple text output with minimal formatting

Using the REPL

Basic Usage

  1. Enter a command in the input area
  2. Select your preferred output format
  3. Click "Execute" to run the command
  4. View the results in the output display

Schema Definition

When creating or updating tables, you can define schema fields with various constraints:

CREATE_TABLE users
  name STRING --required --min=2 --max=50
  email STRING --required --unique --pattern="^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$"
  age INTEGER --min=0 --max=120

Available field types:

  • STRING: Text data
  • INTEGER: Whole numbers
  • FLOAT: Decimal numbers
  • BOOLEAN: True/false values

Available field constraints:

  • --required: Field must have a value
  • --unique: Field values must be unique within the table
  • --min=<value>: Minimum value (for numbers) or length (for strings)
  • --max=<value>: Maximum value (for numbers) or length (for strings)
  • --pattern=<regex>: Regular expression pattern for string validation
  • --default=<value>: Default value if none is provided
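
Following the pattern of the example above, constraints can be combined freely. A hedged sketch (the table and field names are illustrative, and quoting of the default value is assumed to follow the same convention as --pattern):

CREATE_TABLE settings
  user_id INTEGER --required --unique
  theme STRING --default="light"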

Security Considerations

The REPL provides direct access to the database, which comes with significant power and responsibility:

  1. Limited Access: Only ROOT users can access the REPL
  2. Audit Trail: All REPL commands are logged in the system events
  3. No Undo: Most operations cannot be undone, especially schema changes and deletions
  4. Performance Impact: Complex queries on large tables may impact system performance

Best Practices

  1. Test in Development: Use the REPL in development environments before running commands in production
  2. Back Up First: Create backups before making significant schema changes
  3. Use Transactions: For complex operations, consider using transactions to ensure data integrity
  4. Document Changes: Keep records of schema changes made through the REPL
  5. Prefer API: For routine operations, use the standard API endpoints rather than direct REPL access

Example Use Cases

The REPL is particularly useful for:

  1. Prototyping: Quickly create and test database structures
  2. Data Cleanup: Fix or remove problematic records
  3. Schema Evolution: Add or modify fields in existing tables
  4. Troubleshooting: Inspect database contents for debugging
  5. Data Migration: Bulk operations during system updates

  • Kinesis DB - Learn about the underlying database system

Extended Components

While Kinesis API's core components provide the essential functionality needed for API development and management, the platform also offers a set of extended components that enhance your experience and provide additional capabilities. These components aren't required for basic operation but can significantly improve your workflow and expand what's possible with the platform.

What Are Extended Components?

Extended components are supplementary features that:

  • Add convenience and efficiency to your workflow
  • Enable specialized functionality for specific use cases
  • Provide quality-of-life improvements
  • Extend the platform's capabilities beyond core API management

These components can be used as needed, depending on your specific requirements and preferences.

Available Extended Components

Redirects

The redirects system allows you to create and manage URL redirects within your Kinesis API instance:

  • Define source and destination URLs
  • Set up permanent (301) redirects
  • Create vanity URLs for easier sharing
  • Redirect legacy endpoints to new API paths

Redirects are particularly useful when restructuring your API or creating shortened URLs for documentation and sharing.

Snippets

Code snippets provide a way to store and reuse common code patterns:

  • Create a library of reusable code fragments
  • Share implementation examples across your team
  • Include language-specific samples for API consumers
  • Document common usage patterns

Snippets improve consistency and save time when implementing similar functionality across different parts of your API.

Tickets

The tickets system provides a comprehensive issue tracking solution:

  • Report bugs and technical issues
  • Request new features or improvements
  • Ask questions about functionality
  • Track tasks and assignments
  • Collect user feedback

Tickets facilitate communication between users and developers, helping maintain quality and responsiveness in your API ecosystem.

Misc

The Misc section contains miscellaneous utilities that don't fit neatly into other categories:

  • Global Navigate: A system-wide navigation tool for quickly accessing different parts of the platform
  • Utility functions and helpers for common tasks
  • System status indicators and tools

These miscellaneous components provide convenience features that enhance your overall experience with Kinesis API.

When to Use Extended Components

Consider using extended components when:

  • You need to maintain backward compatibility while evolving your API (Redirects)
  • Your team is implementing similar patterns across multiple routes (Snippets)
  • You want to track issues, improvements, and user feedback (Tickets)
  • You want to optimize your workflow and navigation (Global Navigate)
  • You're looking for ways to enhance user experience for API consumers

While not essential for core functionality, these components can significantly improve productivity and user experience when used appropriately.

Getting Started with Extended Components

To begin using extended components:

  1. Familiarize yourself with the core components first
  2. Identify areas where extended components could improve your workflow
  3. Explore the specific documentation for each extended component
  4. Start with simple implementations and expand as needed

Extended components are designed to be approachable and incrementally adoptable, allowing you to incorporate them into your workflow at your own pace.

Best Practices

When working with extended components:

  • Use redirects sparingly and monitor their performance impact
  • Maintain a well-organized snippet library with clear naming conventions
  • Document how and where extended components are being used in your API
  • Regularly review and update your extended component configurations

Following these practices will help you get the most benefit from Kinesis API's extended capabilities.

  • Core Components - Essential building blocks of Kinesis API
  • Redirects - URL redirection management
  • Snippets - Code fragment library
  • Tickets - Issue tracking and feedback system
  • Misc - Miscellaneous utilities and tools

Tickets

The Tickets system in Kinesis API provides a robust issue tracking solution that allows users to report bugs, request features, ask questions, and provide feedback. It's designed to facilitate collaboration between users and developers while maintaining a transparent record of reported issues and their resolutions.

Overview

The ticketing system serves as a central hub for:

  • Bug reporting and tracking
  • Feature requests and suggestions
  • User questions and support inquiries
  • Task management and assignment
  • General feedback collection

All tickets are categorized, prioritized, and tracked through their lifecycle, providing clear visibility into the status of each issue.

Key Features

Ticket Management

  • Create Tickets: Anyone can create tickets with a title, description, type, and priority
  • Ticket Types: Categorize as BUG, FEATURE, IMPROVEMENT, QUESTION, TASK, or FEEDBACK
  • Priority Levels: Assign LOW, MEDIUM, HIGH, CRITICAL, or URGENT priority
  • Status Tracking: Monitor progress through OPEN, ACTIVE, RESOLVED, CLOSED, or WONTDO statuses
  • Tags: Add custom tags for better organization and searchability
  • Public/Private: Control visibility of tickets to other users

User Interactions

  • Comments: Add discussions, updates, and additional information to tickets
  • Assignments: Assign tickets to specific users for resolution
  • Subscriptions: Subscribe to tickets to receive updates
  • Anonymous Submissions: Create tickets without requiring user registration

Organization and Discovery

  • Filtering: Filter tickets by type, status, and content
  • Sorting: Sort by title, priority, creation date, or last update
  • Pagination: Navigate through large numbers of tickets efficiently
  • Search: Find tickets by keywords in title, description, or tags

User Permissions

Different user roles have different capabilities within the ticketing system:

All Users (Including Anonymous)

  • Create new tickets
  • View public tickets
  • Comment on public tickets
  • Subscribe to ticket updates (with email)

Ticket Owners

  • Update title, description, and ticket type of their own tickets
  • Modify tags on their own tickets
  • Toggle public/private status of their own tickets
  • Delete their own tickets

Ticket Assignees

  • Update ticket status
  • Update ticket priority
  • Modify tags on assigned tickets

Administrators (ROOT/ADMIN)

  • Full control over all tickets
  • Assign/unassign users to tickets
  • Archive tickets
  • Delete any ticket or comment
  • Access both public and private tickets

Using the Tickets List Page

Tickets List Page

The tickets list page provides a comprehensive view of all accessible tickets with powerful organization tools:

Filtering and Sorting

  • Use the search bar to filter tickets by content
  • Filter by ticket type (BUG, FEATURE, etc.)
  • Filter by status (OPEN, ACTIVE, etc.)
  • Sort by various criteria:
    • Title (ascending/descending)
    • Priority (highest/lowest)
    • Creation date (newest/oldest)
    • Last update

Creating Tickets

  1. Click the "Create a new Ticket" button
  2. Enter a descriptive title
  3. Provide a detailed description using the Markdown editor
  4. Select the appropriate ticket type and priority
  5. Add relevant tags (optional)
  6. Choose whether the ticket should be public
  7. Submit your contact information (if not logged in)
  8. Click "Create"

Using the Individual Ticket Page

Tickets Page

Each ticket has its own dedicated page that shows all details and allows for interactions:

Viewing Ticket Information

  • Ticket ID and title
  • Current status, priority, and type
  • Creation and last update timestamps
  • Full description
  • Tags and visibility status

Interacting with Tickets

  • Adding Comments: Use the "Post a new Comment" button to add information or ask questions
  • Updating Fields: Authorized users can edit various fields through the corresponding buttons
  • Managing Assignees: View and modify who is assigned to resolve the ticket
  • Subscribing: Get notified of updates by subscribing to the ticket

Comment Management

  • Add formatted comments with the Markdown editor
  • Edit your own comments if needed
  • View all discussions chronologically

Best Practices

For effective use of the ticketing system:

  • Be Specific: Provide clear titles and detailed descriptions
  • Use Appropriate Types: Correctly categorize your ticket (BUG, FEATURE, etc.)
  • Set Realistic Priorities: Reserve HIGH/CRITICAL/URGENT for genuinely urgent issues
  • Check for Duplicates: Before creating a new ticket, search for similar existing ones
  • Stay Engaged: Respond to questions and provide additional information when requested
  • Update Status: Keep ticket status current to reflect actual progress

Blog

The Blog system in Kinesis API provides a powerful publishing platform that allows users to create, manage, and share content. It's designed to facilitate content creation with rich formatting while supporting interactive features like comments and likes.

Overview

The blog system serves as a content hub for:

  • Publishing articles and announcements
  • Sharing technical documentation
  • Engaging users through comments
  • Organizing content with tags and categories

All blog posts can be formatted with Markdown, allowing for rich content including code snippets, images, links, and formatting.

Key Features

Blog Post Management

  • Create Posts: Authors can create blog posts with titles, content, and metadata
  • Rich Formatting: Full Markdown support with preview
  • Drafts: Save posts as unpublished drafts before making them public
  • Image Support: Add preview images and carousel galleries
  • Public/Private: Control visibility of posts to other users
  • Tags: Add custom tags for better organization and searchability
  • Slugs: Custom URL-friendly identifiers for SEO

User Interactions

  • Comments: Add discussions and feedback to blog posts
  • Likes: Express appreciation for posts and comments
  • View Tracking: Monitor post popularity through view counts
  • Anonymous Comments: Allow feedback without requiring user registration

Organization and Discovery

  • Filtering: Filter posts by public/private status, publication status, and content
  • Sorting: Sort by title, creation date, or last update
  • Pagination: Navigate through large numbers of posts efficiently
  • Search: Find posts by keywords in title, subtitle, content, or tags

User Permissions

Different user roles have different capabilities within the blog system:

All Users (Including Anonymous)

  • View public blog posts
  • Comment on public blog posts (anonymously if not logged in)
  • Like posts and comments (if logged in)

Authors

  • Create new blog posts
  • Edit and delete their own posts
  • Toggle public/private status of their own posts
  • Publish/unpublish their own posts
  • Edit and delete their own comments

Administrators (ROOT/ADMIN)

  • Full control over all blog posts
  • Edit and delete any post or comment
  • Access both public and private posts
  • Publish/unpublish any post

Using the Blog List Page

Blog Page

The blog list page provides a comprehensive view of all accessible blog posts with powerful organization tools:

Filtering and Sorting

  • Use the search bar to filter posts by content
  • Filter by post status (Public/Private, Published/Unpublished)
  • Sort by various criteria:
    • Title (ascending/descending)
    • Creation date (newest/oldest)
    • Last update

Creating Blog Posts

  1. Click the "Create a new blog post" button
  2. Enter a title (a slug will be automatically generated)
  3. Provide a subtitle that summarizes the post
  4. Write your content using the Markdown editor
  5. Add a preview image (optional)
  6. Add carousel images (optional)
  7. Add relevant tags (comma-separated)
  8. Set visibility (public/private)
  9. Set publication status (published/unpublished)
  10. Click "Create"

Using the Individual Blog Post Page

Blog View Page

Each blog post has its own dedicated page that shows all details and allows for interactions:

Viewing Blog Post Information

  • Post title and subtitle
  • Author information and publication date
  • Full content with Markdown rendering
  • Preview image and carousel gallery (if added)
  • Tags and visibility status
  • View count and like count

Interacting with Blog Posts

  • Liking: Click the like button to show appreciation for the post
  • Sharing: Copy the post URL to share with others
  • Editing: Authors and admins can edit the post
  • Deleting: Authors and admins can delete the post

Comment Management

  • Add formatted comments with the Markdown editor
  • Edit your own comments if needed
  • Like comments to show appreciation
  • Delete your own comments or, as an admin, any comment

Reading Progress

A progress bar at the top of the post indicates how far you've read, helping you keep track of your position in longer articles.

Editing Blog Posts

Blog Edit Page

Authors and administrators can edit existing blog posts:

  1. From the blog post view page, click "Edit"
  2. Modify any field including title, content, images, etc.
  3. Click "Update Blog Post" to save your changes

Managing Comments

The comment system allows for rich interaction with blog posts:

Adding Comments

  1. Scroll to the comments section at the bottom of a blog post
  2. Click "Add a new Comment"
  3. Write your comment using the Markdown editor
  4. If not logged in, provide your name and email
  5. Submit your comment

Editing and Deleting Comments

  1. Locate your comment in the comments section
  2. Use the edit (pencil) or delete (trash) icons
  3. For editing, modify your comment and click "Submit"
  4. For deletion, confirm your choice in the confirmation dialog

Best Practices

For effective use of the blog system:

  • Use Clear Titles: Create descriptive, compelling titles
  • Add Subtitles: Provide a brief summary that entices readers
  • Format Content Well: Use Markdown to organize content with headings, lists, and emphasis
  • Include Images: Add visual interest with relevant images
  • Use Tags Consistently: Develop a tagging system for better organization
  • Draft First: Use the unpublished status to work on posts before making them public
  • Moderate Comments: Keep discussions constructive and on-topic

Content Guidelines

When creating blog posts:

  • Be Original: Avoid plagiarism and duplicate content
  • Add Value: Focus on providing useful, informative content
  • Stay Relevant: Keep content on-topic and aligned with your audience's interests
  • Use Clear Language: Write in a clear, concise style
  • Include Links: Reference related content and sources
  • Proofread: Check spelling, grammar, and formatting before publishing

Redirects

The Redirects system in Kinesis API provides a way to create and manage URL redirects within your application. This feature is useful for creating shortened URLs, handling legacy endpoints, or creating memorable links to complex resources.

Understanding Redirects

Each redirect in Kinesis API consists of:

  • ID: A unique identifier for the redirect
  • Locator: A short, unique code that forms part of the redirect URL
  • URL: The destination where users will be redirected

When someone accesses a redirect URL (e.g., https://your-api.com/go/abc123), they are automatically forwarded to the destination URL associated with that locator.

Accessing Redirects Management

To access the Redirects management interface:

  1. Log in to your Kinesis API account
  2. Navigate to /web/redirects in your browser or select "Redirects" from the navigation menu

Redirects Page

Redirects Interface

The Redirects management interface includes:

  • A searchable list of all redirects in the system
  • Pagination controls for navigating through large redirect collections
  • Actions for creating, updating, and deleting redirects
  • Tools for copying redirect URLs for sharing

Creating a Redirect

To create a new redirect:

  1. Click the "Create a New Redirect" button at the top of the page
  2. Enter the destination URL in the field provided
  3. (Optional) Specify a custom locator in the "Custom Locator" field
    • If left empty, the system will automatically generate a unique locator
    • Custom locators must be unique across the system
  4. Click "Create" to generate the redirect

Create Redirect Modal

The system will use your custom locator if provided, or generate a unique locator code if none is specified. This locator forms part of the shortened URL, ensuring uniqueness and consistency across all redirects.

Using Redirects

After creating a redirect, you can use it by:

  1. Copying the redirect URL by clicking the clipboard icon next to the redirect
  2. Sharing this URL with others or using it in your applications

The redirect URL will be in the format:

https://your-api-domain.com/go/locator-code

When users access this URL, they will be automatically redirected to the destination URL you specified.

Managing Redirects

Filtering Redirects

To find specific redirects:

  1. Use the search box at the top of the redirects list
  2. Type any part of the locator code or destination URL
  3. The list will automatically filter to show matching redirects

Pagination

For systems with many redirects:

  1. Navigate between pages using the pagination controls
  2. The page displays up to 15 redirects at a time

Updating Redirects

To change a redirect's destination URL:

  1. Find the redirect in the list
  2. Click the edit button (pencil icon)
  3. Enter the new destination URL in the modal that appears
  4. Click "Submit" to save the changes

Update Redirect Modal

Note that only the destination URL can be updated. The locator code is fixed when the redirect is created and cannot be modified later. If you need a different URL pattern, create a new redirect and delete the old one.

Deleting Redirects

To remove a redirect:

  1. Find the redirect in the list
  2. Click the delete button (trash icon)
  3. Confirm the deletion in the modal that appears

Delete Redirect Modal

Once deleted, the redirect URL will no longer work, and users attempting to access it will receive an error.

Common Use Cases

Redirects in Kinesis API can be used for:

URL Shortening

Create more manageable, shorter URLs for sharing complex links:

https://your-api.com/go/product1 → https://your-api.com/products/catalog/electronics/smartphones/model-x-256gb-black

API Version Management

Provide stable URLs that can be updated when API endpoints change:

https://your-api.com/go/user-api → https://your-api.com/api/v2/users

Later, when you update to v3, you can simply update the redirect without changing the URL clients use.

Marketing Campaigns

Create memorable URLs for marketing campaigns:

https://your-api.com/go/summer-sale → https://your-api.com/shop/promotions/summer-2023?discount=25&campaign=email

Temporary Resources

Link to resources that might change location:

https://your-api.com/go/docs → https://docs.google.com/document/d/1abc123def456

Best Practices

For effective redirect management:

  1. Monitor Usage: Periodically review your redirects to identify and remove unused ones
  2. Avoid Redirect Chains: Try not to redirect to URLs that themselves redirect to other locations
  3. Security Awareness: Be cautious about redirecting to external sites that could pose security risks
  4. Regular Cleanup: Delete redirects that are no longer needed to keep your system organized

Snippets

Snippets in Kinesis API provide a powerful way to store, share, and reuse code fragments, documentation, and other text-based content. They serve as a central repository for commonly used patterns, examples, and templates that can be easily referenced across your projects.

Understanding Snippets

Each snippet in Kinesis API has:

  • Name: A descriptive title for the snippet
  • Description: A brief explanation of the snippet's purpose
  • Content: The actual text content, which supports Markdown formatting
  • Visibility Setting: Public or private access control
  • Optional Expiry Date: A time when the snippet will automatically expire

Snippets support Markdown with additional features like syntax highlighting for code blocks, diagrams via Mermaid.js, and emoji support, making them versatile for various documentation and code sharing needs.

Accessing Snippets Management

To access the Snippets management interface:

  1. Log in to your Kinesis API account
  2. Navigate to /web/snippets in your browser or select "Snippets" from the navigation menu

Snippets Page

Snippets Interface

The Snippets management interface includes:

  • A searchable list of all your snippets
  • Pagination controls for navigating through large snippet collections
  • Actions for creating, viewing, editing, and deleting snippets
  • Tools for copying snippet links for sharing

Creating a Snippet

To create a new snippet:

  1. Click the "Create a New Snippet" button at the top of the Snippets page

  2. Fill in the required information:

    • Name: A title for your snippet
    • Description: A brief explanation of the snippet's purpose
    • Content: The main text of your snippet, with support for Markdown
    • Expiry (optional): A date when the snippet should expire
    • Visibility: Toggle between public and private
  3. Click "Create" to save your snippet

Create Snippet Page

Markdown Support

When creating or editing snippets, you can use Markdown formatting:

  • Basic formatting: Headings, lists, links, bold, italic, etc.
  • Code blocks: Syntax highlighting for various programming languages
  • Diagrams: Create flowcharts and diagrams using Mermaid.js syntax
  • Tables: Organize data in tabular format
  • Emoji: Add emoji using standard Markdown emoji codes

Viewing Snippets

To view a snippet:

  1. Click on a snippet name or use the "View Snippet" button (open link icon) from the list
  2. The snippet content will be displayed with all formatting applied
  3. Additional details like ID, description, visibility, and expiry date are shown

View Snippet Page

Managing Snippets

Filtering Snippets

To find specific snippets:

  1. Use the search box at the top of the snippets list
  2. Type any part of the snippet name, ID, locator, or description
  3. The list will automatically filter to show matching snippets

Editing Snippets

To edit an existing snippet:

  1. Click the edit button (pencil icon) next to the snippet in the list or on the view page
  2. Modify any of the snippet details
  3. Click "Update Snippet" to save your changes

Edit Snippet Page

Deleting Snippets

To remove a snippet:

  1. Click the delete button (trash icon) next to the snippet
  2. Confirm the deletion in the modal that appears
  3. The snippet will be permanently removed

Sharing Snippets

Snippets can be easily shared:

  1. For any snippet, click the "Copy link to Snippet" button (clipboard icon)
  2. The URL will be copied to your clipboard
  3. Share this URL with others

The URL will be in the format:

https://your-api-domain.com/go/[snippet-locator]

Visibility Controls

Snippets have two visibility settings:

  • Private: Only visible to authenticated users of your Kinesis API instance
  • Public: Accessible to anyone with the link, even without authentication

Choose the appropriate visibility based on the sensitivity of the content and your sharing needs.

Snippet Expiration

When creating or editing a snippet, you can set an optional expiry date:

  1. Select a date and time in the Expiry field
  2. After this time, the snippet will no longer be accessible
  3. Expired snippets are automatically removed from the system

This feature is useful for temporary content that shouldn't persist indefinitely.

Best Practices

For effective snippet management:

  1. Descriptive Names: Use clear, searchable names that indicate the snippet's purpose
  2. Complete Descriptions: Add context in the description to help others understand when and how to use the snippet
  3. Proper Formatting: Use Markdown features to make content more readable and organized
  4. Set Appropriate Visibility: Make snippets public only when the content is suitable for wider access
  5. Use Expiration Dates: For temporary information, set an expiry date to keep your snippet library clean
  6. Organize by Purpose: Create separate snippets for different purposes rather than combining unrelated content

Common Use Cases

Snippets are particularly useful for:

Code Reuse

Store frequently used code patterns for easy reference:

  • API call templates
  • Common functions or utilities
  • Configuration examples

Documentation

Create and share documentation fragments:

  • Setup instructions
  • Troubleshooting guides
  • API usage examples

Knowledge Sharing

Share knowledge across your team:

  • Best practices
  • Architecture decisions
  • Design patterns

Quick Reference

Build a personal reference library:

  • Command line examples
  • Frequent queries
  • Common workflows

Search Engine

The search engine in Kinesis API provides powerful and flexible data filtering capabilities through an intuitive query language. It allows you to search through records in any table with sophisticated matching operations and complex filter conditions.

Query Language

The search engine implements a custom query language that supports:

  • String operations: contains, equals, startsWith, endsWith, fuzzy matching
  • Numeric operations: greater than, less than, equals, between ranges
  • Boolean operations: equals, not equals
  • Logical operators: AND, OR, NOT
  • Grouping with parentheses for complex expressions

Using the Search API

REST API Endpoints

There are two main endpoints for using the search feature:

1. Search with structured filter (POST)

POST /search/
Content-Type: application/json
Authorization: Bearer <token>

{
  "uid": 0,
  "search": {
    "table_name": "users",
    "filter": {
      "String": {
        "field": "name",
        "matcher": {"Contains": ["John", false]}
      }
    },
    "top_n": 10
  }
}
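
The boolean in the matcher appears to control case sensitivity, mirroring the query syntax below: {"Contains": ["John", false]} corresponds to the case-insensitive name contains "John".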

2. Search with query string (GET)

GET /search/?uid=0&table_name=users&query=name contains "John"&top_n=10
Authorization: Bearer <token>

The query string approach provides a more human-readable and concise way to create search filters.
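
Note that in a real HTTP request, the spaces and quotes inside the query parameter must be percent-encoded. The example above would be sent on the wire as:

GET /search/?uid=0&table_name=users&query=name%20contains%20%22John%22&top_n=10
Authorization: Bearer <token>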

Query Syntax Examples

String Operations

name contains "John"             // Case-insensitive substring match
name equals "John Doe", true     // Case-sensitive exact match
name startswith "J"              // Prefix matching
name endswith "son"              // Suffix matching
name fuzzy "Jon", 2, true        // Fuzzy matching with edit distance 2 and substring enabled

Numeric Operations

age > 30                         // Greater than
age < 20                         // Less than
age = 25                         // Equals
age between 20 and 30            // Range (inclusive)

Boolean Operations

active equals true               // Boolean equality
active equals false              // Boolean equality

Logical Operators

name contains "John" AND age > 30                  // Logical AND
name contains "John" OR name contains "Jane"       // Logical OR
NOT name contains "Admin"                          // Logical NOT

Complex Expressions

name contains "John" AND (age > 30 OR role equals "admin")    // Nested conditions
(name contains "John" OR name contains "Jane") AND age > 30    // Multiple groupings

How It Works

The search engine processes queries through three main steps:

  1. Parsing: Converts string queries into a structured filter representation
  2. Evaluation: Applies the filter to records, calculating match scores
  3. Ranking: Orders results by relevance score and returns the top N results

The engine uses indexes where available for better performance, falling back to full table scans when necessary.

Performance Considerations

  • Use indexed fields in your queries when possible
  • Prefer exact matches over fuzzy searches for better performance
  • Limit results with the top_n parameter for large tables
  • Complex queries with multiple conditions may take longer to process

Localization

The Kinesis API includes a powerful localization system that makes it easy to translate your application into multiple languages. It supports features like variable interpolation, pluralization, and fallback locales.

Features

  • Multiple Locale Support: Manage translations for any number of languages
  • Fallback Chain: Automatically fallback to parent locales (e.g., "en" for "en-US")
  • Variable Interpolation: Insert dynamic values into translations
  • Pluralization Rules: Language-specific plural forms for accurate translations
  • Hot Reloading: Automatically detect and reload modified translation files
  • REST API: Manage translations through a RESTful API

Translation File Format

Translations are stored in JSON files, one per locale:

{
  "common": {
    "welcome": "Welcome to Kinesis API",
    "error": "An error occurred",
    "buttons": {
      "save": "Save",
      "cancel": "Cancel"
    }
  },
  "auth": {
    "login": "Log in",
    "logout": "Log out",
    "register": "Sign up"
  },
  "notifications": {
    "message_count": {
      "one": "You have {count} new message",
      "other": "You have {count} new messages"
    }
  }
}

These are accessed using dot notation (e.g., common.welcome, auth.login, notifications.message_count).

Managing Translations via Web UI

Kinesis API provides a user-friendly web interface for managing translations without needing to work directly with JSON files or API calls.

Accessing the Translation Manager

  1. Log in to your Kinesis API instance
  2. Navigate to /web/locales in your browser
  3. You'll see a list of all available locales and their translations

Locales Management Page

Working with Locales

The top of the page displays available locales as buttons. Click any locale to view and manage its translations. The active locale is highlighted.

Creating Translations

  1. Click the "Create a new translation" button at the top of the page
  2. Enter the locale code (e.g., "en", "fr"), or use the pre-filled active locale
  3. Enter the key (using dot notation, e.g., "common.buttons.save")
  4. Enter the translation value
  5. Click "Create" to add the translation

If you enter a locale that doesn't exist yet, it will be created automatically.

Filtering Translations

Use the filter box to quickly find translations by key or value. This is especially useful for locales with many translations.

Updating Translations

  1. Find the translation you want to modify
  2. Click the edit (pencil) icon
  3. Enter the new translation value
  4. Click "Submit" to save your changes

Deleting Translations

  1. Find the translation you want to remove
  2. Click the delete (trash) icon
  3. Confirm the deletion when prompted

Permissions

Only users with ADMIN or ROOT roles can create, update, or delete translations. All users can view translations.

API Endpoints

Fetch All Locales

Get a list of all available locales and their translation counts.

GET /locale/fetch?uid=0
Authorization: Bearer <token>

Response:

{
  "status": 200,
  "message": "Locales successfully fetched!",
  "locales": [
    { "code": "en", "translation_count": 42 },
    { "code": "fr", "translation_count": 36 }
  ],
  "amount": 2
}

Fetch One Locale

Get all translations for a specific locale.

GET /locale/fetch/one?uid=0&locale=en
Authorization: Bearer <token>

Response:

{
  "status": 200,
  "message": "Locale successfully fetched!",
  "translations": {
    "common.welcome": "Welcome to Kinesis API",
    "common.error": "An error occurred",
    "common.buttons.save": "Save"
    // ... other translations
  }
}

Translate Text

Translate a specific key with optional variables and count for pluralization.

POST /locale/translate
Content-Type: application/json

{
  "locale": "en",
  "key": "notifications.message_count",
  "variables": {
    "user": "John"
  },
  "count": 5
}

Response:

{
  "status": 200,
  "message": "Translation successful!",
  "translation": "You have 5 new messages",
  "key": "notifications.message_count",
  "locale": "en"
}
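
If you are calling this endpoint from Rust, a minimal sketch might look like the following. The base URL is a placeholder, and the example assumes the reqwest crate (with its "blocking" and "json" features) and serde_json as dependencies:

use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder base URL: replace with your Kinesis API instance.
    let client = reqwest::blocking::Client::new();
    let response = client
        .post("http://localhost:8080/locale/translate")
        .json(&json!({
            "locale": "en",
            "key": "notifications.message_count",
            "variables": { "user": "John" },
            "count": 5
        }))
        .send()?;

    // The response shape matches the example above.
    let body: serde_json::Value = response.json()?;
    println!("{}", body["translation"]); // "You have 5 new messages"
    Ok(())
}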

Create Translation

Add a new translation key-value pair to a locale.

POST /locale/create
Content-Type: application/json
Authorization: Bearer <token>

{
  "uid": 0,
  "locale": "en",
  "key": "common.buttons.submit",
  "value": "Submit"
}

Response:

{
  "status": 200,
  "message": "Translation successfully added!",
  "locale": "en",
  "key": "common.buttons.submit"
}

Update Translation

Update an existing translation.

PATCH /locale/update
Content-Type: application/json
Authorization: Bearer <token>

{
  "uid": 0,
  "locale": "en",
  "key": "common.buttons.save",
  "value": "Save Changes"
}

Response:

{
  "status": 200,
  "message": "Translation successfully added!",
  "locale": "en",
  "key": "common.buttons.save"
}

Delete Translation

Delete a translation key from a locale.

DELETE /locale/delete?uid=0&locale=en&key=common.buttons.cancel
Authorization: Bearer <token>

Response:

{
  "status": 200,
  "message": "Translation successfully deleted!",
  "locale": "en",
  "key": "common.buttons.cancel"
}

Pluralization

The localization system supports proper pluralization rules for different languages. For example:

{
  "items_count": {
    "one": "You have {count} item",
    "other": "You have {count} items"
  }
}

For languages with more complex plural forms (like Slavic languages), additional forms are supported:

{
  "items_count": {
    "one": "У вас {count} элемент",
    "few": "У вас {count} элемента",
    "many": "У вас {count} элементов",
    "other": "У вас {count} элементов"
  }
}

The system automatically selects the correct plural form based on the count parameter and the language's pluralization rules.
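
As an illustration, plural-category selection can be reduced to a small function keyed by locale and count. The category names match the JSON keys above; the Russian rule shown is a condensed version of the CLDR cardinal rules, included as an assumption for illustration rather than the engine's actual implementation:

/// Pick the plural category ("one", "few", "many", "other") for a count.
fn plural_category(locale: &str, count: u64) -> &'static str {
    match locale {
        // English: 1 is "one", everything else is "other".
        "en" => if count == 1 { "one" } else { "other" },
        // Russian: condensed CLDR rule for whole numbers.
        "ru" => match (count % 10, count % 100) {
            (1, h) if h != 11 => "one",
            (2..=4, h) if !(12..=14).contains(&h) => "few",
            _ => "many",
        },
        _ => "other",
    }
}

fn main() {
    assert_eq!(plural_category("en", 1), "one");
    assert_eq!(plural_category("en", 5), "other");
    assert_eq!(plural_category("ru", 2), "few"); // "У вас 2 элемента"
    assert_eq!(plural_category("ru", 5), "many"); // "У вас 5 элементов"
}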

Variable Interpolation

You can insert dynamic values into translations using curly braces:

{
  "welcome_user": "Welcome, {{name}}!"
}

Then provide the variables when translating:

{
  "locale": "en",
  "key": "welcome_user",
  "variables": {
    "name": "John"
  }
}

This will produce: "Welcome, John!"
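
Conceptually, interpolation is a placeholder-substitution pass over the translated string. A minimal sketch (illustrative, not the actual implementation):

use std::collections::HashMap;

// Replace {name}-style placeholders with values from the variables map.
fn interpolate(template: &str, variables: &HashMap<&str, &str>) -> String {
    let mut result = template.to_string();
    for (name, value) in variables {
        result = result.replace(&format!("{{{name}}}"), value);
    }
    result
}

fn main() {
    let vars = HashMap::from([("name", "John")]);
    assert_eq!(interpolate("Welcome, {name}!", &vars), "Welcome, John!");
}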

Setup and Configuration

Translations are stored in JSON files in the /translations directory, with the filename matching the locale code (e.g., en.json, fr.json).

The localization system is initialized with all translations when Kinesis API is started.

Permissions

The following permissions are required for various localization operations:

  • LOCALE_FETCH: Required to fetch locales and translations
  • LOCALE_CREATE_UPDATE: Required to create or update translations
  • LOCALE_DELETE: Required to delete translations

Best Practices

  1. Structured Keys: Organize keys in a logical hierarchy (e.g., feature.component.text)
  2. Complete Translations: Ensure all keys have translations in all supported locales
  3. Reuse Common Phrases: Use the same key for identical text across different features
  4. Avoid Variable Concatenation: Use variable interpolation instead of string concatenation
  5. Test All Locales: Regularly test your application with all supported locales

Backups

The Backup system in Kinesis API provides automated backup and restore functionality for your database and configuration files. This built-in feature allows you to create, manage, and restore backups directly through the web interface or API, ensuring your data is protected and recoverable.

Overview

The backup system serves as a comprehensive data protection solution for:

  • Creating point-in-time snapshots of your entire database
  • Scheduling automatic backups with expiration dates
  • Restoring your system to a previous state
  • Managing backup storage and retention policies

All backups are stored as compressed archives containing your database files, ensuring efficient storage while maintaining data integrity.

Key Features

Automated Backup Creation

  • One-Click Backups: Create full system backups instantly through the web interface
  • Compressed Storage: Backups are automatically compressed to save storage space
  • Complete Coverage: Includes all database files (pages, blobs, and indexes)
  • Metadata Tracking: Each backup includes creation time, description, and expiry information

Backup Management

  • Descriptive Labels: Add custom descriptions to identify backup purposes
  • Expiration Dates: Set automatic expiration to manage storage space
  • Filtering and Search: Find specific backups quickly through search functionality
  • Bulk Operations: Manage multiple backups efficiently

Restore Functionality

  • Full System Restore: Restore your entire database from any backup
  • Engine Reset: Automatically reloads the database engine after restoration
  • Data Integrity: Maintains all relationships and constraints during restoration

Storage and Retention

  • Automatic Cleanup: Expired backups are automatically removed
  • Storage Optimization: TAR.GZ compression reduces backup file sizes
  • Secure Storage: Backups are stored in a dedicated, protected directory

User Permissions

Backup functionality requires ROOT privileges:

ROOT Users

  • Create new backups
  • View all existing backups
  • Restore from any backup
  • Delete backup files
  • Manage backup descriptions and expiry dates

Other User Roles

  • No access to backup functionality (security restriction)

Using the Backup Management Interface

Backup Management Page

The backup management interface provides comprehensive control over your backup operations:

Accessing Backup Management

  1. Log in with a ROOT account
  2. Navigate to /web/backups in your browser or select "Backups" from the sidebar menu

Creating a New Backup

To create a backup:

  1. Click the "Create a new backup" button
  2. Add an optional description to identify the backup purpose
  3. Set an optional expiry date for automatic cleanup
  4. Click "Create" to start the backup process

The system will:

  • Create a compressed archive of all database files
  • Store the backup with a timestamp-based filename
  • Add the backup record to the management interface

Viewing Backup Information

Each backup in the list displays:

  • Backup Name: Auto-generated filename with timestamp and ID
  • Creation Date: When the backup was created
  • Description: Custom description (if provided)
  • Expiry Date: When the backup will automatically expire (if set)

Managing Existing Backups

For each backup, you can:

Update Description

  1. Click the "Description" link next to any backup
  2. Edit the description text
  3. Click "Submit" to save changes

Update Expiry Date

  1. Click the "Expiry" link next to any backup
  2. Set a new expiry date/time or clear to remove expiration
  3. Click "Submit" to save changes

Restore from Backup

  1. Click the restore button (refresh icon) next to any backup
  2. Confirm the restoration in the warning dialog
  3. The system will restore all data and restart the database engine

⚠️ Warning: Restoring a backup will replace all current data with the backup data. This action cannot be undone.

Delete Backup

  1. Click the delete button (trash icon) next to any backup
  2. Confirm the deletion in the warning dialog
  3. The backup file and record will be permanently removed

Filtering and Searching

To find specific backups:

  1. Use the search bar to filter by backup name or description
  2. The list will automatically update to show matching backups
  3. Pagination controls help navigate through large backup collections

Backup File Structure

Backups are stored as TAR.GZ archives containing:

  • Configuration Files (.env): System configuration and environment variables
  • Database Pages (data/main_db.pages): Core database structure and data
  • Blob Storage (data/main_db.blobs): Large string data storage
  • Blob Index (data/main_db.blobs.idx): Blob storage index for quick access
  • Public Directory (public/): User-uploaded media files and assets
  • Translations Directory (translations/): Localization files and language packs

The backup filename format is:

backup-[YYYYMMDD_HHMM]_[backup_id].tar.gz

For example: backup-20250804_1430_15.tar.gz
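
For reference, a name in this format can be produced with a few lines of Rust. This sketch assumes the chrono crate; the actual generation code may differ:

use chrono::Local;

// Build a name like backup-20250804_1430_15.tar.gz from the current time.
fn backup_filename(backup_id: u32) -> String {
    format!(
        "backup-{}_{}.tar.gz",
        Local::now().format("%Y%m%d_%H%M"),
        backup_id
    )
}

fn main() {
    println!("{}", backup_filename(15));
}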

Automatic Cleanup

The backup system includes automatic maintenance:

Expiry Processing

  • Expired backups are automatically identified and removed
  • This happens whenever backup operations are performed
  • Both the database record and the physical file are cleaned up

Storage Management

  • Only necessary database files are included in backups
  • TAR.GZ compression reduces storage requirements
  • Automatic cleanup prevents unlimited storage growth

API Integration

The backup system can also be accessed programmatically through the REST API:

Create Backup

POST /backup/create
Authorization: Bearer [token]
Content-Type: application/json

{
  "uid": 1,
  "backup": {
    "description": "Pre-deployment backup",
    "expiry": "2025-12-31T23:59:59+00:00"
  }
}

Restore Backup

GET /backup/restore?uid=1&id=15
Authorization: Bearer [token]

List Backups

GET /backup/fetch?uid=1&limit=50&offset=0
Authorization: Bearer [token]

Best Practices

For effective backup management:

Regular Backup Schedule

  • Create backups before major system changes
  • Establish a regular backup routine (daily/weekly)
  • Use descriptive names to identify backup purposes

Retention Strategy

  • Set appropriate expiry dates to manage storage
  • Keep recent backups for quick recovery
  • Archive important milestones for longer-term retention

Testing and Verification

  • Periodically test restore procedures
  • Verify backup integrity by checking file sizes and dates
  • Document your backup and restore procedures

Security Considerations

  • Limit backup access to ROOT users only
  • Monitor backup creation and restoration activities
  • Consider backing up the backup directory to external storage

Restoration Process

When restoring from a backup:

  1. System Shutdown: Current database operations are suspended
  2. File Replacement: Database files are replaced with backup versions
  3. Engine Restart: The database engine is reloaded with restored data
  4. Validation: System verifies the restoration was successful

This process ensures complete data consistency and system integrity after restoration.

Backup Scheduling

Backup Schedules Page

Kinesis API's scheduling feature allows you to automate the backup process with precise timing control using cron-style expressions.

Understanding Backup Schedules

Backup schedules provide:

  • Automated Creation: Backups run automatically on your defined schedule
  • Flexible Timing: From hourly to monthly schedules using cron expressions
  • Expiration Control: Set how long scheduled backups should be retained
  • Resource Optimization: Only create backups when needed

Accessing Backup Scheduling

  1. Log in with a ROOT account
  2. Navigate to /web/backups/schedules or select "Backup Schedules" from the Backups submenu

Creating a Backup Schedule

To create a schedule:

  1. Click the "Create a new backup schedule" button
  2. Fill in the schedule details:
    • Name: A descriptive name (e.g., "Daily Midnight Backup")
    • Schedule: Configure the cron expression components
    • Expiry Hours: How long (in hours) to keep backups created by this schedule
    • Enabled: Toggle to activate/deactivate the schedule
  3. Click "Create" to save and activate the schedule

Understanding Cron Expressions

Backup schedules use standard cron expressions with five fields:

* * * * *
- - - - -
| | | | |
| | | | +----- Day of the week (0 - 6) (Sunday is 0)
| | | +------- Month (1 - 12)
| | +--------- Day of the month (1 - 31)
| +----------- Hour (0 - 23)
+------------- Minute (0 - 59)

Examples

  • 0 * * * *: At minute 0 past every hour
  • 30 1 * * *: At 01:30 AM every day
  • 0 0 * * 0: At midnight on Sundays
  • 15 14 1 * *: At 14:15 on the first day of every month
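
A scheduler only has to check, once per minute, whether the current time matches all five fields. The sketch below supports only "*" and plain numbers; standard cron also allows lists, ranges, and steps, which are omitted here for brevity:

// Does one cron field (e.g. "30" or "*") match a time component?
fn field_matches(field: &str, value: u32) -> bool {
    field == "*" || field.parse::<u32>().map_or(false, |v| v == value)
}

// expr: "minute hour day-of-month month day-of-week" (day-of-week: 0 = Sunday)
fn cron_matches(expr: &str, minute: u32, hour: u32, dom: u32, month: u32, dow: u32) -> bool {
    let fields: Vec<&str> = expr.split_whitespace().collect();
    fields.len() == 5
        && field_matches(fields[0], minute)
        && field_matches(fields[1], hour)
        && field_matches(fields[2], dom)
        && field_matches(fields[3], month)
        && field_matches(fields[4], dow)
}

fn main() {
    // "30 1 * * *" fires at 01:30 every day.
    assert!(cron_matches("30 1 * * *", 30, 1, 15, 6, 3));
    // "0 0 * * 0" fires at midnight on Sundays only (here: a Wednesday).
    assert!(!cron_matches("0 0 * * 0", 0, 0, 15, 6, 3));
}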

Managing Backup Schedules

For each schedule, you can:

Update Schedule

  1. Click the "Edit" button next to any schedule
  2. Modify the schedule details
  3. Click "Save" to apply changes

Delete Schedule

  1. Click the delete button (trash icon) next to any schedule
  2. Confirm the deletion in the warning dialog
  3. The schedule will be permanently removed

Monitoring Scheduled Backups

Scheduled backups are listed with:

  • Schedule Name: Descriptive name of the schedule
  • Next Run: When the backup will next run
  • Expiry: How long backups are retained
  • Status: Enabled or disabled state

Troubleshooting

Common Issues

Backup Creation Failed

  • Check available disk space
  • Verify database file permissions
  • Ensure the backups directory exists and is writable

Restoration Failed

  • Verify the backup file exists and isn't corrupted
  • Check that the backup contains all required database files
  • Ensure sufficient disk space for restoration

Missing Backups

  • Check if backups have expired and been automatically cleaned up
  • Verify backup directory location and permissions
  • Look for backup files in the correct directory

Scheduled Backup Not Running

  • Check if the schedule is enabled
  • Verify the cron expression is correct
  • Ensure the system time is correct

Backup Files Not Expiring

  • Check the expiry hours setting for the schedule
  • Verify that the cleanup process is running
  • Ensure there are no permission issues with the backup files

Content History

The Content History system in Kinesis API provides version tracking and restoration capabilities for key content types. This feature automatically preserves previous versions of content, allowing administrators to recover from accidental changes, track content evolution, or restore content to a known-good state when needed.

Overview

Content History serves as a version control system for:

  • Data objects (structured content)
  • Routes (API definitions)
  • Blog posts (published articles)

The system automatically captures snapshots of content whenever changes are made, storing these versions in a chronological history that can be browsed and restored when needed.

Key Features

Automatic Version Tracking

  • Seamless Capture: Every update to supported content types automatically creates a history entry
  • Comprehensive Metadata: Each version includes creation timestamp and complete content state
  • Zero Configuration: Works out-of-the-box with no setup required
  • Storage Optimization: Only the last 15 versions are retained to conserve storage space

Version Management

  • Historical Timeline: Browse all previous versions of content by date
  • One-Click Restoration: Restore any previous version with a simple confirmation
  • Safe Restoration: The version you replace is kept in history, so a restoration can itself be undone
  • Metadata Retention: All associated data is preserved during restoration

Security and Access Control

  • Root-Only Access: Content history management is restricted to ROOT users
  • Audit Trail: Changes are tracked with timestamps for accountability
  • Non-Destructive: Restoration adds to history rather than erasing it

Supported Content Types

The Content History system currently supports three primary content types:

Data Objects

Complete version history is maintained for data objects, including:

  • All structure values
  • Custom structure values
  • Metadata and configuration

Routes

API route definitions are tracked, preserving:

  • Route configurations
  • Authentication settings
  • Parameters and body definitions
  • Full visual flow definitions

Blog Posts

Blog post versions include:

  • Content body
  • Title, subtitle, and slug
  • Tags and metadata
  • Media references
  • Visibility and publication status

User Permissions

Content History functionality is currently limited to ROOT users only for security reasons:

ROOT Users

  • View content history for all content types
  • Restore any content to previous versions
  • Access the history management interface

Other User Roles

  • No access to content history functionality
  • Cannot view or restore previous versions

Using Content History

The Content History interface is integrated into the edit screens for supported content types:

Content History Interface

Accessing Version History

For ROOT users, a "Restore Previous Version" button appears in the edit interface for:

  1. Data Objects: On the data edit page
  2. Routes: On the route edit page
  3. Blog Posts: On the blog post edit page

Clicking this button opens a modal with a chronological list of previous versions.

Viewing Available Versions

The version history modal displays:

  • Version ID
  • Creation date and time

Versions are sorted with the newest at the top.

Restoring a Previous Version

To restore content to a previous version:

  1. Click the "Restore Previous Version" button
  2. Select the desired version from the list
  3. Confirm the restoration in the confirmation dialog
  4. The content will be restored to the selected version
  5. A success message confirms the restoration

The restoration process:

  • Replaces the current content with the selected version
  • Preserves the history, including the version you just replaced
  • Maintains all relationships and references
  • Automatically applies any structure changes

Technical Implementation

Content History leverages several key components:

Storage Mechanism

  • History entries are stored in a dedicated content_history table
  • Each entry contains a snapshot of the entire content state
  • Large content is automatically handled via the blob storage system
  • Content is serialized using a standardized format for consistency

Tracking Algorithm

When content is updated:

  1. The system captures a complete snapshot before changes are applied
  2. The snapshot is stored with metadata including timestamp and content type
  3. Older entries beyond the retention limit are automatically pruned
  4. References to blob storage are properly maintained
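
In outline, the update hook behaves like the sketch below. The names are illustrative; the real implementation persists snapshots via the content_history table and blob store rather than an in-memory vector:

const RETENTION_LIMIT: usize = 15;

struct HistoryEntry {
    created_at: u64,   // timestamp of the snapshot
    snapshot: Vec<u8>, // serialized pre-change content state
}

// Called before an update is applied: capture a snapshot, then prune.
fn record_history(history: &mut Vec<HistoryEntry>, created_at: u64, snapshot: Vec<u8>) {
    history.push(HistoryEntry { created_at, snapshot });
    // Keep only the newest RETENTION_LIMIT entries; oldest are at the front.
    if history.len() > RETENTION_LIMIT {
        let excess = history.len() - RETENTION_LIMIT;
        history.drain(..excess);
    }
}

fn main() {
    let mut history = Vec::new();
    for t in 0..20 {
        record_history(&mut history, t, vec![]);
    }
    assert_eq!(history.len(), 15);
    assert_eq!(history[0].created_at, 5); // versions 0 through 4 were pruned
}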

Restoration Process

During restoration:

  1. The selected version is deserialized from storage
  2. Current content is completely replaced with the historical version
  3. Any necessary data transformations are applied
  4. The database transaction ensures atomicity
  5. A new history entry is created for the current state (pre-restoration)

Best Practices

For effective use of Content History:

Regular Reviews

  • Periodically review content history for important assets
  • Consider restoration for recovering accidentally deleted content
  • Use history to understand content evolution over time

Before Major Changes

  • Note the timestamp before making significant content changes
  • This helps identify the correct version for restoration if needed
  • Descriptive version labels are planned as a future enhancement; until then, keep your own notes on what each significant change contained

When to Restore

Restoration is particularly valuable when:

  • Content was accidentally deleted or corrupted
  • Previous content needs to be referenced or reused
  • Comparing current content with historical versions
  • Undoing changes that didn't meet expectations

Storage Considerations

  • Be aware that only the last 15 versions are retained
  • Plan important content updates accordingly
  • Export critical content before major changes

Future Enhancements

The Content History system will be expanded in future releases:

  • Access for additional user roles with appropriate permissions
  • Descriptive labels for important versions
  • Side-by-side comparison of versions
  • More granular control over retention policies
  • Additional content types support

Troubleshooting

Common Issues

History Not Appearing

  • Verify you have ROOT privileges
  • Check that the content type is supported
  • Ensure content has been modified at least once

Restoration Failed

  • Check that the content still exists
  • Verify database connectivity
  • Ensure sufficient storage space is available

Missing Versions

  • Older versions beyond the 15-version limit are automatically pruned
  • Check if the content has undergone many revisions

Misc

The Misc page in Kinesis API provides a collection of utility tools and functions that can be helpful during development, testing, and general platform management. These tools don't fit into other categories but offer valuable functionality for various tasks.

Accessing the Misc Page

To access the Misc utilities:

  1. Log in to your Kinesis API account
  2. Navigate to /web/misc in your browser or select "Misc" from the navigation menu

Misc Utilities Page

Available Utilities

The Misc page offers several utility tools:

Test MongoDB Connection

Note: This feature is only available to users with ROOT or ADMIN roles.

This utility allows you to test the connection to a MongoDB database:

  1. Click the "Test Mongo Connection" button
  2. Enter the MongoDB URI (in the format mongodb://username:password@host:port/database)
  3. Enter the database name
  4. Click "Test" to verify the connection

This is particularly useful when setting up external data sources or verifying database configurations.

Test SMTP Credentials

Note: This feature is only available to users with ROOT or ADMIN roles.

This tool validates SMTP email server credentials:

  1. Click the "Test SMTP Credentials" button
  2. Enter the required information:
    • Username
    • From Username (optional)
    • Password
    • Host address
    • Port number
    • Login mechanism (PLAIN, LOGIN, or XOAUTH2)
    • StartTLS setting
  3. Click "Test" to verify the credentials

A successful test indicates that Kinesis API can use these credentials to send emails, which is critical for features like user registration and password reset.

Generate a Random UUID

This utility generates random unique identifiers with customizable formatting:

  1. Click the "Generate a Random UUID" button
  2. Configure the UUID format:
    • Length: The number of characters in each group
    • Groups: The number of groups to include
    • Include an additional number block: Option to add a numeric suffix
  3. Click "Generate" to create the UUID
  4. Copy the generated UUID using the "Copy UUID" button

UUIDs are useful for creating unique identifiers for resources, temporary tokens, or any scenario where uniqueness is required.

Generate a Random Secret

This tool creates secure random strings for use as secrets, passwords, or tokens:

  1. Click the "Generate a Random Secret" button
  2. Configure the secret:
    • Length: The number of characters in the secret
    • Include special characters: Whether to include symbols in addition to letters and numbers
  3. Click "Generate" to create the secret
  4. Copy the generated secret using the "Copy Secret" button

This is particularly useful for generating secure API keys, passwords, or other sensitive credentials.
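
Both generators boil down to sampling random characters from a chosen alphabet. A sketch using the rand crate (the alphabet and exact behavior are assumptions, not the platform's implementation):

use rand::Rng;

// Generate a random secret of `length` characters, optionally with symbols.
fn generate_secret(length: usize, special_characters: bool) -> String {
    let mut alphabet: Vec<char> = ('a'..='z').chain('A'..='Z').chain('0'..='9').collect();
    if special_characters {
        alphabet.extend("!@#$%^&*-_".chars());
    }
    let mut rng = rand::thread_rng();
    (0..length)
        .map(|_| alphabet[rng.gen_range(0..alphabet.len())])
        .collect()
}

fn main() {
    println!("{}", generate_secret(32, true));
}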

URL Shortener

This utility creates shortened URLs for any link:

  1. Click the "URL Shortener" button
  2. Enter the long URL you want to shorten
  3. Click "Shorten" to create a shortened link
  4. Copy the shortened URL using the "Copy URL" button

The shortened URLs are in the format [api_url]/go/[locator]. These links can be shared with others to provide more manageable URLs for long addresses.

Use Cases

These utility tools are valuable in various scenarios:

  • During Setup: Testing MongoDB and SMTP configurations
  • Development: Generating UUIDs and secrets for testing or implementation
  • Content Sharing: Creating shortened URLs for easier sharing
  • Security: Generating strong, random secrets for sensitive operations
  • Troubleshooting: Verifying connectivity to external services

Global Navigate

Global Navigate is a powerful keyboard-driven navigation system built into Kinesis API that allows you to quickly jump to any part of the application without using the mouse. This feature significantly speeds up your workflow by providing shortcuts to navigate between projects, collections, data, and other components.

Accessing Global Navigate

There are two ways to open the Global Navigate modal:

  1. Keyboard Shortcut: Press Ctrl+G (Windows/Linux) or ⌘+G (Mac)
  2. UI Button: Click the keyboard shortcut hint in the bottom-right corner of the screen

Basic Navigation

The Global Navigate system accepts various commands in a simple, intuitive syntax:

  1. Enter a base command (e.g., p for projects)
  2. Optionally add IDs to navigate to specific resources (e.g., p/project_id)
  3. Press Enter to navigate
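
Internally, each command reduces to a base keyword plus optional IDs split on forward slashes, which then map to a destination. The sketch below illustrates the idea for a couple of commands; the /web/... paths and router shape are assumptions, not the actual implementation:

// Map a Global Navigate command to a destination path (illustrative only).
fn navigate(input: &str) -> Option<String> {
    let mut parts = input.trim().split('/');
    let base = parts.next()?;
    let ids: Vec<&str> = parts.collect();
    match (base, ids.as_slice()) {
        ("p" | "pr" | "pro" | "project" | "projects", []) => Some("/web/projects".into()),
        ("p" | "pr" | "pro" | "project" | "projects", [id]) => Some(format!("/web/projects/{id}")),
        ("s" | "settings", []) => Some("/web/settings".into()),
        _ => None, // remaining commands follow the same pattern
    }
}

fn main() {
    assert_eq!(navigate("p").as_deref(), Some("/web/projects"));
    assert_eq!(navigate("p/inventory").as_deref(), Some("/web/projects/inventory"));
}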

Base Commands

Global Navigate supports numerous shorthand commands:

Command           | Aliases                                   | Destination
p, pr, pro        | project, projects                         | Projects page
c, col            | collection                                | Collection page
cs                | custom, customstructure, custom_structure | Custom Structure page
d                 | data                                      | Data management
r                 | route, routes                             | Routes page
pg, play          | playground                                | Playground
u, us             | user, users                               | Users management
pat               | pats                                      | Personal Access Tokens
m                 | media                                     | Media management
red, rs           | redirects                                 | Redirects
snp, snip         | snippet, snippets                         | Snippets
mc                | misc                                      | Miscellaneous utilities
e, es             | events                                    | Events log
conf              | config, configs                           | Configuration settings
const             | constraint, constraints                   | Constraints
s                 | settings                                  | User settings
h, dash           | home, dashboard                           | Dashboard
a                 | about                                     | About page
ch, change        | changelog                                 | Changelog
road              | roadmap                                   | Roadmap
repl              | shell                                     | REPL shell
logout            | end                                       | Log out
b, blo            | blog                                      | Blog posts
l, loc            | locale, locales                           | Localization
t, tic, tick      | ticket, tickets                           | Ticketing system
bkp, back, backup | backups                                   | Backups

Advanced Navigation Patterns

For hierarchical resources, you can navigate directly to specific items by adding IDs to your command with forward slashes:

Projects Navigation

  • p - Go to projects list
  • p/project_id - Go directly to a specific project

Collections Navigation

  • c - Go to projects list
  • c/project_id - Go to a specific project
  • c/project_id/collection_id - Go directly to a specific collection

Custom Structures Navigation

  • cs - Go to projects list
  • cs/project_id - Go to a specific project
  • cs/project_id/collection_id - Go to a specific collection
  • cs/project_id/collection_id/custom_structure_id - Go directly to a specific custom structure

Data Navigation

  • d - Go to data projects list
  • d/project_id - Go to a project's collections for data
  • d/project_id/collection_id - Go to data objects in a collection
  • d/project_id/collection_id/data_id - Go directly to a specific data object

Routes Navigation

  • r - Go to routes projects list
  • r/project_id - Go to routes in a project
  • r/project_id/route_id - Go directly to a specific route

Playground Navigation

  • pg - Go to playground projects list
  • pg/project_id - Go to routes in a project's playground
  • pg/project_id/route_id - Go directly to testing a specific route

Snippets Navigation

  • snp - Go to snippets list
  • snp/snippet_id - Go directly to a specific snippet

Users Navigation

  • u - Go to users list
  • u/username - Go directly to a specific user's profile

Blog Navigation

  • b - Go to blog posts list
  • b/post_id - Go directly to a specific blog post

Tickets Navigation

  • t - Go to tickets list
  • t/ticket_id - Go directly to a specific ticket

Usage Examples

Command              | Result
p                    | Navigate to the projects list
p/inventory          | Navigate to the inventory project
c/inventory/products | Navigate to the products collection within the inventory project
d/blog/posts/post123 | Navigate to the post123 data object in the posts collection
r/api/auth           | Navigate to the auth route in the api project
pg/shop/checkout     | Test the checkout route in the playground
u/admin              | View the admin user's profile
settings             | Go to your user settings page
logout               | Log out of the system
b/welcome-post       | View the welcome-post blog article
t/support-request    | View the support-request ticket

Benefits of Global Navigate

Global Navigate offers several advantages for power users:

  • Speed: Navigate anywhere in the application with just a few keystrokes
  • Efficiency: Reduce dependency on mouse movements and menu navigation
  • Directness: Jump directly to deeply nested resources without navigating through multiple pages
  • Accessibility: Provide keyboard-focused navigation options
  • Productivity: Streamline repetitive navigation tasks

Tips for Effective Use

  • Learn the Shortcuts: Memorize the base commands for sections you frequently visit
  • Use Tab Completion: In the future, Global Navigate may support tab completion
  • Bookmark Common Paths: Keep a note of full paths for resources you access regularly
  • Direct Navigation: Instead of navigating through multiple pages, use the full path pattern

Backups

Regular backups are a critical component of maintaining a reliable Kinesis API installation. This guide covers how to properly back up your Kinesis API data and configuration, ensuring you can recover from unexpected failures or data corruption.

Backup Methods

Kinesis API offers multiple backup approaches to suit different needs and technical preferences:

1. Built-in Backup System (Recommended)

New in version 0.31.0: Kinesis API now includes a comprehensive backup system accessible through the web interface and API. This is the recommended method for most users as it provides:

  • Automated backup creation and management
  • Built-in compression and storage optimization
  • Integrated restore functionality
  • Expiration and retention management
  • User-friendly web interface

Getting Started:

  1. Log in to your Kinesis API web interface as a ROOT user
  2. Navigate to the "Backups" section in the sidebar
  3. Click "Create a new backup" to generate your first backup

For detailed instructions on using the built-in backup system, see the Backup Management page.

Benefits:

  • No manual file handling required
  • Automatic compression reduces storage space
  • Built-in validation ensures backup integrity
  • Simple restore process with automatic engine restart
  • Integrated with the permission system for security

2. Manual File-Based Backup

For users who prefer direct file system control or need custom backup procedures:

Understanding Kinesis API's Data Storage

Before discussing backup strategies, it's important to understand where Kinesis API stores its data:

  1. Database Files: Located in the data/ directory
  2. Configuration: Stored in the .env file
  3. Media Files: Stored in the public/ directory
  4. Translations: Stored in the translations/ directory

A complete backup must include all four components to ensure full recovery.

Manual Backup Methods

Docker Installation Backup:

If you're running Kinesis API via Docker (the recommended method), follow these steps:

  1. Stop the container (optional but recommended for consistency):

    docker stop kinesis-api
    
  2. Create the backup directory:

    mkdir backup/
    
  3. Back up the data directory:

    cp -r data/ backup/data/
    
  4. Back up the environment file:

    cp .env backup/.env
    
  5. Back up the public directory (if you've uploaded media):

    cp -r public/ backup/public/
    
  6. Back up the translations directory (if you're making use of the localization engine):

    cp -r translations/ backup/translations/
    
  7. Restart the container (if you stopped it):

    docker start kinesis-api
    

Native Installation Backup:

If you're running Kinesis API directly on your host:

  1. Stop the Kinesis API service:

    # If using systemd
    sudo systemctl stop kinesis-api
    
    # Or if running directly
    kill $(pgrep kinesis-api)
    
  2. Back up the required directories and files:

    cp -r /path/to/kinesis-api/data/ /path/to/backup/data/
    cp /path/to/kinesis-api/.env /path/to/backup/.env
    cp -r /path/to/kinesis-api/public/ /path/to/backup/public/
    cp -r /path/to/kinesis-api/translations/ /path/to/backup/translations/
    
  3. Restart the service:

    # If using systemd
    sudo systemctl start kinesis-api
    
    # Or if running directly
    cd /path/to/kinesis-api && ./target/release/kinesis-api &
    

Automated Backup Scripts

Simple Daily Backup Script

Create a file called backup-kinesis-api.sh:

#!/bin/bash
# Kinesis API Backup Script

# Configuration
KINESIS_DIR="/path/to/kinesis-api"
BACKUP_DIR="/path/to/backups"
BACKUP_NAME="kinesis-api-backup-$(date +%Y%m%d-%H%M%S)"

# Create backup directory
mkdir -p "$BACKUP_DIR/$BACKUP_NAME"

# Optional: Stop the container for consistent backups
docker stop kinesis-api

# Copy the data
cp -r "$KINESIS_DIR/data" "$BACKUP_DIR/$BACKUP_NAME/"
cp "$KINESIS_DIR/.env" "$BACKUP_DIR/$BACKUP_NAME/"
cp -r "$KINESIS_DIR/public" "$BACKUP_DIR/$BACKUP_NAME/"
cp -r "$KINESIS_DIR/translations" "$BACKUP_DIR/$BACKUP_NAME/"

# Restart the container
docker start kinesis-api

# Compress the backup
cd "$BACKUP_DIR"
tar -czf "$BACKUP_NAME.tar.gz" "$BACKUP_NAME"
rm -rf "$BACKUP_NAME"

# Optional: Rotate backups (keep last 7 days)
find "$BACKUP_DIR" -name "kinesis-api-backup-*.tar.gz" -type f -mtime +7 -delete

echo "Backup completed: $BACKUP_DIR/$BACKUP_NAME.tar.gz"

Make the script executable and schedule it with cron:

chmod +x backup-kinesis-api.sh
crontab -e

Add a line to run it daily at 2 AM:

0 2 * * * /path/to/backup-kinesis-api.sh

Restoring from Backup

Kinesis API provides multiple restoration methods depending on how your backup was created:

1. Built-in System Restoration (Recommended)

The built-in backup system provides the simplest restoration process:

Through Web Interface:

  1. Access the Backups Page: Log in as a ROOT user and navigate to /web/backups
  2. Select Backup: Find the backup you want to restore from the list
  3. Click Restore: Click the restore button (refresh icon) next to your chosen backup
  4. Confirm Action: Confirm the restoration in the warning dialog
  5. Automatic Process: The system will:
    • Replace all current data with backup data
    • Restart the database engine automatically
    • Verify the restoration was successful

Through API:

# Replace with your actual values
curl -X GET "http://your-api-url/backup/restore?uid=1&id=backup_id" \
  -H "Authorization: Bearer your-jwt-token"

Important Notes:

  • ⚠️ Warning: Restoring a backup will replace ALL current data with the backup data
  • This action cannot be undone
  • The system automatically handles database engine restart
  • No manual file manipulation is required

Choosing the Right Backup:

When selecting a backup for restoration:

  • Check Creation Time: Ensure you're selecting the backup from the correct time period
  • Read Description: Use backup descriptions to identify the purpose (e.g., "Before major update")
  • Consider Data Loss: Any data created after the backup timestamp will be lost
  • Verify Backup Age: Check that the backup hasn't expired and been automatically cleaned up

2. Manual File-Based Restoration

For backups created using manual file methods:

Docker Installation Restoration:

  1. Stop the running instance:

    docker stop kinesis-api
    
  2. Replace the data with the backup:

    # If your backup is compressed
    tar -xzf kinesis-backup.tar.gz
    
    # Replace the current data
    rm -rf data/ .env public/ translations/
    cp -r backup/data/ .
    cp backup/.env .
    cp -r backup/public/ .
    cp -r backup/translations/ .
    
  3. Restart the service:

    docker start kinesis-api
    

Native Installation Restoration:

  1. Stop the Kinesis API service:

    # If using systemd
    sudo systemctl stop kinesis-api
    
    # Or if running directly
    kill $(pgrep kinesis-api)
    
  2. Replace the files:

    # Backup current state (optional safety measure)
    cp -r /path/to/kinesis-api/data/ /path/to/kinesis-api/data.backup/
    
    # Restore from backup
    rm -rf /path/to/kinesis-api/data/
    cp -r /path/to/backup/data/ /path/to/kinesis-api/
    cp /path/to/backup/.env /path/to/kinesis-api/
    cp -r /path/to/backup/public/ /path/to/kinesis-api/
    cp -r /path/to/backup/translations/ /path/to/kinesis-api/
    
  3. Restart the service:

    # If using systemd
    sudo systemctl start kinesis-api
    
    # Or if running directly
    cd /path/to/kinesis-api && ./target/release/kinesis-api &
    

3. Hybrid Restoration Approach

You can also combine methods for maximum flexibility:

Scenario: Restore Built-in Backup to External Environment

  1. Download Backup: Use the built-in system to create a backup
  2. Export Data: Access the backup files from the system
  3. Manual Transfer: Copy to your target environment
  4. Manual Restoration: Use file-based restoration on the target

Scenario: Import Manual Backup into Built-in System

  1. Create Manual Backup: Using your existing file-based process
  2. Import to System: Manually copy files to Kinesis API directory
  3. Create Built-in Backup: Use the web interface to create a new backup for future use
  4. Built-in Restoration: Use the integrated system for future restorations

Restoration Best Practices

Before Restoration:

  1. Create Current Backup: Always backup your current state before restoring
  2. Verify Backup Integrity: Ensure the backup you're restoring from is not corrupted
  3. Check Dependencies: Verify that environment configurations match
  4. Plan Downtime: Coordinate restoration during maintenance windows

During Restoration:

  1. Follow Exact Steps: Don't skip steps or modify the process
  2. Monitor Logs: Watch for error messages during the restoration
  3. Verify Completeness: Ensure all files are restored properly

After Restoration:

  1. Test Functionality: Verify that all features work as expected
  2. Check Data Integrity: Confirm that data is accessible and correct
  3. Update Documentation: Record the restoration in your maintenance logs
  4. User Communication: Notify users of any changes or data loss

Troubleshooting Restoration Issues:

Built-in System Issues:

  • Backup Not Found: Check that the backup hasn't expired
  • Permission Errors: Ensure you're logged in as a ROOT user
  • Restoration Fails: Check system logs for specific error messages

Manual Restoration Issues:

  • File Permission Errors: Ensure proper ownership and permissions
  • Database Corruption: Try using an earlier backup
  • Configuration Mismatches: Verify environment variables match

Recovery from Failed Restoration:

If a restoration fails:

  1. Stop the Service: Prevent further damage
  2. Restore Previous State: Use your pre-restoration backup
  3. Investigate Cause: Check logs and file integrity
  4. Try Alternative Method: Consider using the other restoration approach
  5. Seek Support: Contact support if issues persist

Restoration Testing

Regular Testing Schedule:

  • Test restoration procedures monthly in a non-production environment
  • Document the complete process and timing
  • Verify that all data types and features work correctly after restoration

Testing Environment Setup:

  1. Isolated Environment: Use a separate test environment
  2. Production-like Configuration: Mirror your production setup
  3. Test Data: Use anonymized production data for realistic testing

Upgrading

This guide provides instructions for upgrading your Kinesis API installation to newer versions. Because Kinesis API is actively developed, new releases may include important security updates, bug fixes, and new features.

Before You Begin

Before upgrading your Kinesis API installation, it's important to take some precautionary steps:

  1. Create a backup: Always back up your data before upgrading. See the Backups guide for detailed instructions.

  2. Review the changelog and roadmap:

    • Check the Changelog at /web/changelog for detailed information about each version's changes, including any breaking changes or deprecations
    • Review the Roadmap at /web/roadmap for a higher-level overview of monthly development progress and upcoming features
  3. Test in a non-production environment: If possible, test the upgrade process in a development or staging environment before applying it to production.

Upgrading Docker Installations

If you're running Kinesis API via Docker (the recommended method), follow these steps:

Step 1: Pull the Latest Image

# Pull the latest version
docker pull edgeking8100/kinesis-api:latest

# Or pull a specific version
docker pull edgeking8100/kinesis-api:x.y.z

Step 2: Stop the Current Container

docker stop kinesis-api

Step 3: Remove the Current Container (preserving volumes)

docker rm kinesis-api

Step 4: Start a New Container with the Updated Image

docker run --name kinesis-api \
  -v $(pwd)/.env:/app/.env \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/public:/app/public \
  -v $(pwd)/translations:/app/translations \
  -p 8080:8080 -d \
  --restart unless-stopped \
  edgeking8100/kinesis-api:latest

Step 5: Verify the Upgrade

  1. Check the container logs for any errors:

    docker logs kinesis-api

  2. Access the web interface to confirm it's working correctly.

Upgrading Rust Installations

If you're running Kinesis API directly from the Rust source:

Step 1: Backup Your Data and Environment

cp -r data/ data_backup/
cp .env .env.backup

Step 2: Update the Source Code

git pull origin master

Step 3: Build the Updated Version

cargo build --release

Step 4: Restart the Service

# If running as a systemd service
sudo systemctl restart kinesis-api

# Or if running directly
./target/release/kinesis-api

Post-Upgrade Steps

After upgrading, perform these checks:

  1. Verify all projects and collections are accessible
  2. Test API endpoints to ensure they're functioning correctly
  3. Review logs for any errors or warnings
  4. Check database integrity if you've upgraded across major versions

Handling Breaking Changes

Kinesis API follows semantic versioning (SemVer):

  • Patch updates (e.g., 1.0.0 to 1.0.1): Safe to upgrade, contains bug fixes
  • Minor updates (e.g., 1.0.0 to 1.1.0): Generally safe, adds new features in a backward-compatible way
  • Major updates (e.g., 1.0.0 to 2.0.0): May contain breaking changes

For major version upgrades, read the release notes carefully as they will include:

  • List of breaking changes
  • Required manual steps
  • Migration guides

Rollback Procedures

If you encounter issues after upgrading, you can roll back to the previous version:

Docker Rollback

# Stop the current container
docker stop kinesis-api
docker rm kinesis-api

# Start a new container with the previous version
docker run --name kinesis-api \
  -v $(pwd)/.env:/app/.env \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/public:/app/public \
  -v $(pwd)/translations:/app/translations \
  -p 8080:8080 -d \
  --restart unless-stopped \
  edgeking8100/kinesis-api:previous-version-tag

Rust Rollback

# Checkout the previous version
git checkout v1.x.x

# Rebuild
cargo build --release

# Restart
sudo systemctl restart kinesis-api  # or start manually

Troubleshooting Common Upgrade Issues

Database Connection Errors

If you see database connection errors after upgrading:

  1. Check that your data directory permissions are correct
  2. Verify the database connection settings in your .env file
  3. Ensure the data format is compatible with the new version

UI Issues After Upgrade

If the web interface appears broken after upgrading:

  1. Clear your browser cache
  2. Check for JavaScript console errors
  3. Verify that all static assets are being served correctly

API Endpoints Not Working

If API endpoints stop working after an upgrade:

  1. Check the route configuration in the X Engine
  2. Verify that any changes to authentication methods have been accounted for
  3. Look for changes in parameter handling or response formats

Getting Help

If you encounter issues during the upgrade process, retrace the troubleshooting steps above, review the changelog for known breaking changes, and reach out through the project's support channels with your version number and relevant log output.

Kinesis DB: Technical Documentation

This document provides a deep technical overview of the Kinesis database layer, focusing on its architecture, core modules, transaction management, persistence, and extensibility. It is intended for developers working on or extending the database internals.


Architecture Overview

The Kinesis database layer is designed as a modular, transactional, and persistent storage engine supporting multiple backends:

  • InMemory: Purely in-memory, non-persistent.
  • OnDisk: Fully persistent, disk-based.
  • Hybrid: Combines in-memory caching with disk persistence for performance.

The main entry point is the DBEngine, which orchestrates all database operations, transaction management, and persistence.


Core Modules

1. database/engine.rs (DBEngine)

  • Central orchestrator for all database operations.
  • Manages:
    • In-memory state (Database)
    • Disk persistence (via PageStore, BufferPool, BlobStore)
    • Transaction lifecycle (TransactionManager)
    • Write-Ahead Logging (WriteAheadLog)
    • Table schemas and schema migrations
    • Optional secondary indexes (IndexManager)

Key Responsibilities

  • Transaction Management: Begin, commit, rollback, and validate transactions.
  • Record Operations: Insert, update, delete, and search records with schema validation.
  • Persistence: Save/load state to/from disk, manage page allocation, and handle large data via blobs.
  • Crash Recovery: Replay WAL and restore consistent state after failures.
  • Schema Evolution: Support for schema updates with migration and validation.

2. database/database.rs (Database)

  • Represents the in-memory state of all tables.
  • Each table is a Table with its own schema, records, and next record ID.

3. database/table.rs (Table)

  • Stores records as a map of record ID to Record.
  • Enforces schema constraints and manages record versioning.

4. database/record.rs (Record)

  • Represents a single row in a table.
  • Contains:
    • id: Unique identifier
    • values: Map of field name to ValueType
    • version and timestamp for MVCC and concurrency control

5. database/schema.rs (TableSchema, FieldConstraint)

  • Defines table structure, field types, constraints (required, unique, min/max, pattern), and default values.
  • Supports schema validation and migration logic.

6. database/value_type.rs (ValueType, StringValue)

  • Strongly-typed representation of all supported field types (Int, Float, String, Bool, etc.).
  • StringValue supports both inline and blob-referenced storage for large strings.
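
Pieced together from the descriptions above, the core data model looks roughly like the sketch below (field lists follow this document; the actual definitions may differ in detail):

use std::collections::BTreeMap;

// Strongly-typed field values; large strings may live in the blob store.
enum ValueType {
    Int(i64),
    Float(f64),
    Bool(bool),
    Str(StringValue),
}

enum StringValue {
    Inline(String),  // small strings stored directly in the record
    BlobRef(String), // key referencing an entry in the BlobStore
}

// A single row in a table.
struct Record {
    id: u64,                             // unique identifier
    values: BTreeMap<String, ValueType>, // field name -> value
    version: u64,                        // for MVCC and concurrency control
    timestamp: u64,
}

fn main() {
    let mut values = BTreeMap::new();
    values.insert("name".into(), ValueType::Str(StringValue::Inline("John".into())));
    values.insert("age".into(), ValueType::Int(30));
    let record = Record { id: 1, values, version: 1, timestamp: 0 };
    println!("record {} has {} fields", record.id, record.values.len());
}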

7. storage/page_store.rs, storage/buffer_pool.rs, storage/blob_store.rs

  • PageStore: Manages allocation, reading, and writing of fixed-size pages on disk.
  • BufferPool: In-memory cache for pages, with LRU eviction.
  • BlobStore: Efficient storage for large binary/string data, with reference counting and garbage collection.

8. storage/wal.rs (WriteAheadLog)

  • Ensures durability and crash recovery by logging all transactional changes before commit.

9. transaction/manager.rs, transaction/transaction.rs

  • TransactionManager: Tracks active transactions, locks, timeouts, and deadlock detection.
  • Transaction: Encapsulates all pending changes, isolation level, and MVCC snapshot.

Key Functions and Internal Workings

Loading the Database (DBEngine::load_from_disk)

  • Purpose: Loads the database state from disk into memory.
  • How it works:
    • Reads the Table of Contents (TOC) from the first page using the buffer pool and page store.
    • Deserializes table locations, schemas, and next record IDs.
    • For each table, loads all records from their page chains (handling both old and new page formats, including overflow pages).
    • Reconstructs in-memory Table objects and inserts them into the database.
    • Skips loading for InMemory databases.

Saving the Database (DBEngine::save_to_disk)

  • Purpose: Persists the current in-memory state to disk.
  • How it works:
    • Iterates over all tables, serializes their records in batches.
    • Writes each batch to disk using the page store, allocating new pages as needed.
    • Updates the TOC with new page locations, schemas, and next IDs.
    • Flushes all pages via the buffer pool and syncs the page store to ensure durability.
    • Returns a checksum of the serialized state for verification.

Committing Transactions (DBEngine::commit and commit_internal)

  • Purpose: Atomically applies all changes in a transaction, ensuring ACID properties.
  • How it works:
    • Validates the transaction for isolation level conflicts (e.g., repeatable read, serializable).
    • Acquires necessary locks for all records being written.
    • Applies all staged changes (inserts, updates, deletes, schema changes) to the in-memory database.
    • Logs the transaction to the WAL (unless it's a blob operation or in-memory DB).
    • Persists changes to disk (if not in-memory).
    • Releases locks and ends the transaction.
    • On failure, rolls back all changes using the transaction's snapshot.

Rolling Back Transactions (DBEngine::rollback)

  • Purpose: Reverts all changes made by a transaction if commit fails or is aborted.
  • How it works:
    • Restores original state for all modified records using the transaction's snapshot.
    • Releases any acquired locks.
    • Cleans up any staged blob references.

Transaction Lifecycle

  • Begin: begin_transaction creates a new transaction, optionally with a snapshot for MVCC.
  • Read/Write: All record operations are staged in the transaction's pending sets.
  • Commit: See above.
  • Rollback: See above.

Table and Record Operations

  • Creating Tables: create_table or create_table_with_schema adds a new table definition to the transaction's pending creates.
  • Inserting Records: insert_record validates the record against the schema, handles large strings (blobs), and stages the insert.
  • Updating Records: update_record validates updates, manages blob references, and stages the update.
  • Deleting Records: delete_record stages the deletion and tracks any blob references for cleanup.

Blob Storage

  • Large strings are stored in the BlobStore if they exceed a threshold.
  • Records store a reference to the blob key.
  • On update or delete, old blob references are cleaned up.
  • Blob store is synced to disk after changes.

Write-Ahead Log (WAL)

  • Purpose: Ensures durability and crash recovery.
  • How it works:
    • All transactional changes are serialized and appended to the WAL before being applied.
    • On startup, the WAL is replayed to recover any committed but not yet persisted transactions.
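
The core write-ahead discipline (log first, apply second) can be shown with plain standard-library I/O. This is a sketch of the idea, not the WAL's actual on-disk format:

use std::fs::OpenOptions;
use std::io::Write;

// Append one serialized transaction to the log and fsync before applying it.
fn wal_append(path: &str, serialized_tx: &[u8]) -> std::io::Result<()> {
    let mut file = OpenOptions::new().create(true).append(true).open(path)?;
    // Length-prefix each entry so replay can find record boundaries.
    file.write_all(&(serialized_tx.len() as u64).to_le_bytes())?;
    file.write_all(serialized_tx)?;
    file.sync_all()?; // durability: flushed to disk before changes are applied
    Ok(())
}

fn main() -> std::io::Result<()> {
    wal_append("example.wal", b"tx-1: insert users/1")
}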

Crash Recovery (DBEngine::recover_from_crash)

  • Purpose: Restores a consistent state after a crash.
  • How it works:
    • Loads the database from disk.
    • Loads and replays all valid transactions from the WAL.
    • Applies each transaction and persists the state after each.

Schema Management

  • Schema Validation: All record operations are validated against the current table schema.
  • Schema Migration: update_table_schema stages a schema update, which is validated for compatibility and applied atomically.

Transaction Types and Isolation Levels

Kinesis supports four transaction isolation levels, each with distinct semantics and internal handling:

1. ReadUncommitted

  • Behavior: Transactions can see uncommitted changes from other transactions ("dirty reads").
  • Implementation: No snapshot is taken. Reads always reflect the latest in-memory state, regardless of commit status.
  • Use Case: Maximum performance, minimal isolation. Rarely recommended except for analytics or non-critical reads.

2. ReadCommitted

  • Behavior: Transactions only see data that has been committed by other transactions.
  • Implementation:
    • No snapshot is taken.
    • On each read, the engine checks the committed state of the record.
    • During commit, the engine validates that records read have not changed since they were read (prevents "non-repeatable reads").
  • Use Case: Standard for many OLTP systems, balances consistency and concurrency.

3. RepeatableRead

  • Behavior: All reads within a transaction see a consistent snapshot of the database as of the transaction's start.
  • Implementation:
    • A full snapshot of the database is taken at transaction start.
    • Reads are served from the snapshot.
    • On commit, the engine validates that no records in the read set have changed since the snapshot.
    • Prevents non-repeatable reads and phantom reads (to the extent possible without full serializability).
  • Use Case: Applications requiring strong consistency for reads within a transaction.

4. Serializable

  • Behavior: Transactions are fully isolated as if executed serially.
  • Implementation:
    • Like RepeatableRead, but with additional validation:
      • Checks for write-write conflicts.
      • Ensures no new records (phantoms) have appeared in the range of interest.
      • Validates that the snapshot and current state are equivalent for all read and written data.
    • May abort transactions if conflicts are detected.
  • Use Case: Highest level of isolation, required for financial or critical systems.

Transaction Internals

Transaction Structure

Each transaction (Transaction) tracks:

  • ID: Unique identifier.
  • Isolation Level: Determines snapshot and validation logic.
  • Snapshot: Optional, for higher isolation levels.
  • Read Set: Records read (table, id, version).
  • Write Set: Records written (table, id).
  • Pending Operations: Inserts, updates, deletes, schema changes.
  • Metadata: For tracking blob references and other auxiliary data.
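
In code form, that per-transaction state corresponds roughly to a structure like the following (a sketch assembled from the field list above; the exact definition differs):

use std::collections::{HashMap, HashSet};

enum IsolationLevel {
    ReadUncommitted,
    ReadCommitted,
    RepeatableRead,
    Serializable,
}

// Sketch of the tracked transaction state. Pending updates, deletes, and
// schema changes are elided; they follow the same staging pattern as inserts.
struct Transaction {
    id: u64,
    isolation_level: IsolationLevel,
    snapshot: Option<DatabaseSnapshot>,    // only for RepeatableRead/Serializable
    read_set: HashSet<(String, u64, u64)>, // (table, record id, version read)
    write_set: HashSet<(String, u64)>,     // (table, record id)
    pending_inserts: Vec<(String, PendingRecord)>,
    metadata: HashMap<String, String>,     // e.g. staged blob references
}

// Placeholder types for the sketch.
struct DatabaseSnapshot;
struct PendingRecord;

fn main() {
    let tx = Transaction {
        id: 1,
        isolation_level: IsolationLevel::Serializable,
        snapshot: Some(DatabaseSnapshot),
        read_set: HashSet::new(),
        write_set: HashSet::new(),
        pending_inserts: Vec::new(),
        metadata: HashMap::new(),
    };
    println!("tx {} started", tx.id);
}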

Transaction Flow

  1. Begin: DBEngine::begin_transaction creates a new transaction, possibly with a snapshot.
  2. Read: Reads are tracked in the read set. For RepeatableRead/Serializable, reads come from the snapshot.
  3. Write: Writes are staged in the write set and pending operations.
  4. Validation: On commit, the engine validates the transaction according to its isolation level.
  5. Commit: If validation passes, changes are applied atomically.
  6. Rollback: If validation fails or an error occurs, all changes are reverted.
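
The same lifecycle can be demonstrated end to end with a deliberately tiny in-memory engine. This toy is only meant to show the begin / stage / validate / commit semantics; names and signatures are invented for the sketch and do not match DBEngine:

use std::collections::HashMap;

// Toy engine: committed state plus transactions that stage their writes.
struct ToyEngine {
    committed: HashMap<u64, String>, // record id -> value
}

struct ToyTransaction {
    pending_inserts: Vec<(u64, String)>,
}

impl ToyEngine {
    fn begin_transaction(&self) -> ToyTransaction {
        ToyTransaction { pending_inserts: Vec::new() }
    }

    // Writes are staged on the transaction, not applied to committed state.
    fn insert_record(&self, tx: &mut ToyTransaction, id: u64, value: &str) {
        tx.pending_inserts.push((id, value.to_string()));
    }

    // Validate first, then apply all staged changes atomically.
    fn commit(&mut self, tx: ToyTransaction) -> Result<(), String> {
        // Stand-in for real conflict validation: reject duplicate ids.
        for (id, _) in &tx.pending_inserts {
            if self.committed.contains_key(id) {
                return Err(format!("record {id} already exists")); // nothing applied
            }
        }
        for (id, value) in tx.pending_inserts {
            self.committed.insert(id, value);
        }
        Ok(())
    }
}

fn main() {
    let mut engine = ToyEngine { committed: HashMap::new() };

    let mut tx = engine.begin_transaction();
    engine.insert_record(&mut tx, 1, "alice");
    assert!(engine.commit(tx).is_ok());

    // A conflicting transaction fails validation; none of its writes apply.
    let mut tx2 = engine.begin_transaction();
    engine.insert_record(&mut tx2, 1, "bob");
    assert!(engine.commit(tx2).is_err());
    assert_eq!(engine.committed[&1], "alice");
}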

Locking and Deadlock Detection

  • Locks: Acquired at the record level for writes. Managed by TransactionManager.
  • Deadlock Detection: Periodically checks for cycles in the lock graph. If detected, aborts one or more transactions.
  • Timeouts: Transactions can be configured to expire after a set duration.

Detailed Function Descriptions

DBEngine::begin_transaction

  • Purpose: Starts a new transaction.
  • Details: Assigns a unique ID, sets the isolation level, and optionally takes a snapshot of the database for higher isolation.

DBEngine::commit

  • Purpose: Commits a transaction, applying all staged changes.
  • Details:
    • Validates the transaction (see above).
    • Acquires all necessary locks.
    • Applies inserts, updates, deletes, and schema changes.
    • Logs the transaction to the WAL (unless skipped for blob ops).
    • Persists changes to disk.
    • Releases locks and ends the transaction.

DBEngine::rollback

  • Purpose: Rolls back a transaction, reverting all staged changes.
  • Details: Uses the transaction's snapshot and pending operations to restore the previous state.

DBEngine::insert_record

  • Purpose: Stages a new record for insertion.
  • Details: Validates against schema, handles large strings (blobs), and adds to the transaction's pending inserts.

DBEngine::update_record

  • Purpose: Stages updates to an existing record.
  • Details: Validates updates, manages blob references, and adds to pending updates.

DBEngine::delete_record

  • Purpose: Stages a record for deletion.
  • Details: Tracks blob references for cleanup and adds to pending deletes.

DBEngine::search_records

  • Purpose: Searches for records matching a query string.
  • Details: Supports case-insensitive search, uses the appropriate snapshot or committed state based on isolation level.

DBEngine::create_table_with_schema

  • Purpose: Stages creation of a new table with a specified schema.
  • Details: Adds to the transaction's pending table creates.

DBEngine::update_table_schema

  • Purpose: Stages a schema update for a table.
  • Details: Validates that the migration is safe and adds to pending schema updates.

DBEngine::load_from_disk

  • Purpose: Loads the database state from disk.
  • Details: Reads the TOC, loads all tables and records, reconstructs in-memory state.

DBEngine::save_to_disk

  • Purpose: Persists the current state to disk.
  • Details: Serializes all tables and records, writes to page store, updates TOC, flushes and syncs.

DBEngine::recover_from_crash

  • Purpose: Restores the database to a consistent state after a crash.
  • Details: Loads from disk, replays WAL, applies all valid transactions, persists state.

DBEngine::compact_database

  • Purpose: Compacts the database file, removing unused pages and reorganizing data.
  • Details: Writes all live data to a new file, replaces the old file, and reopens the page store.
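
Taken together, a typical session combining these functions might look like the following sketch; schema and record construction are elided, and any signature not shown in the examples elsewhere in this document is an assumption.

let mut tx = engine.begin_transaction();
engine.create_table_with_schema(&mut tx, "users", schema)?; // staged table create
engine.insert_record(&mut tx, "users", record)?;            // staged insert
engine.commit(tx)?;                                         // validate, apply, WAL, persist

engine.save_to_disk()?;     // explicit checkpoint
engine.compact_database()?; // reclaim space from dead pages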

REPL Shell (REPL)

  • Purpose: Interactive shell for database commands.
  • Supported Commands:
    • CREATE_TABLE <name> ...
    • INSERT INTO <table> ...
    • UPDATE <table> ...
    • DELETE FROM <table> ...
    • SEARCH_RECORDS FROM <table> MATCH <query>
    • GET_RECORDS FROM <table>
    • DROP_TABLE <name>
    • ALTER_TABLE <name> ...
  • Features:
    • Supports output formats: standard, table, JSON, etc.
    • Handles errors and transaction boundaries per command.
    • Useful for development, testing, and demos.

Error Handling & Recovery

  • All disk and transactional errors are surfaced as Result<T, String>.
  • On commit failure, a rollback is automatically attempted.
  • On startup, the engine attempts recovery using the WAL and disk state.

Extensibility

  • Indexes: Optional secondary indexes can be enabled via environment variable.
  • Custom Field Types: Extend ValueType and update schema validation logic.
  • Storage Backends: Implement new DatabaseType variants and adapt DBEngine initialization.

Testing

  • Extensive test suite under src/database/tests/ covers:
    • Basic operations
    • Transactional semantics
    • Persistence and recovery
    • Schema validation and migration
    • Large data and overflow page handling
    • Concurrency and rollback

Developer Notes

  • CommitGuard: RAII pattern to ensure rollback on drop if commit fails.
  • Isolation Levels: Each level has custom validation logic; review validate_transaction.
  • BlobStore: Always clean up blob references on update/delete.
  • Buffer Pool: Tune via DB_BUFFER_POOL_SIZE env variable.
  • Logging: Enable for debugging concurrency, persistence, or recovery issues.
  • Schema Evolution: Use can_migrate_from to validate safe schema changes.
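
For reference, the CommitGuard idea looks roughly like the sketch below; try_commit and the rollback signature are assumptions, and only the rollback-on-drop behavior is taken from this document.

struct CommitGuard<'a> {
    engine: &'a mut DBEngine,
    tx: Option<Transaction>,
}

impl<'a> CommitGuard<'a> {
    fn new(engine: &'a mut DBEngine, tx: Transaction) -> Self {
        CommitGuard { engine, tx: Some(tx) }
    }

    fn commit(mut self) -> Result<(), String> {
        let tx = self.tx.take().expect("transaction already consumed");
        match self.engine.try_commit(&tx) {
            Ok(()) => Ok(()),
            Err(e) => {
                let _ = self.engine.rollback(&tx); // revert staged changes
                Err(e)
            }
        }
    }
}

impl Drop for CommitGuard<'_> {
    fn drop(&mut self) {
        // Dropped without a successful commit: roll the transaction back.
        if let Some(tx) = self.tx.take() {
            let _ = self.engine.rollback(&tx);
        }
    }
}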

Advanced Topics and Additional Details

Buffer Pool and Page Management

  • BufferPool: Implements an LRU cache for disk pages, reducing disk I/O and improving performance.
    • Pages are pinned/unpinned as they are accessed and modified.
    • Eviction policy ensures hot pages remain in memory.
    • Buffer pool size is configurable via environment variable.
  • PageStore: Handles allocation, reading, writing, and freeing of fixed-size pages on disk.
    • Supports overflow pages for large records or batches.
    • Ensures atomicity and durability of page writes.

Large Data and Blob Handling

  • BlobStore: Used for storing large strings or binary data that exceed the inline threshold.
    • Data is stored in separate files with reference counting.
    • Blob references are cleaned up on record deletion or update.
    • BlobStore is memory-mapped for efficient access and syncs to disk after changes.
    • Blob index file tracks all blob keys and their locations.
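
Conceptually, the write path looks like the sketch below; put and release are assumed method names, and only the reference-counting behavior is described in this document.

// Strings above the inline threshold are moved out of the record and
// replaced by a blob reference.
let key = blob_store.put(large_string.as_bytes())?; // refcount = 1

// ...the record stores the key instead of the raw data...

// On record update or delete the engine releases the reference; the
// blob is reclaimed once its refcount reaches zero.
blob_store.release(&key)?;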

Indexing (Optional)

  • IndexManager: If enabled, maintains secondary indexes for fast lookups.
    • Indexes are updated on insert, update, and delete.
    • Can be extended to support custom index types or full-text search.

Schema Evolution and Migration

  • Schema Versioning: Each table schema has a version number.
  • Migration Safety: can_migrate_from checks for safe migrations (e.g., prevents dropping required fields without defaults, or changing types incompatibly).
  • Default Values: New required fields can be added if a default is provided; existing records are backfilled.
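
For example, adding a required field is safe as long as a default is supplied. In the hedged sketch below, the FieldConstraint, FieldType, and ValueType names follow the extension examples later in this document, while add_field, the constraint shape, and the update_table_schema call shape are assumptions.

let mut new_schema = old_schema.clone();
new_schema.version += 1;
new_schema.add_field("nickname", FieldConstraint {
    field_type: FieldType::Text,
    required: true,
    default: Some(ValueType::Text("unknown".into())), // existing records are backfilled
    // ...other constraints...
});

// can_migrate_from rejects unsafe changes: dropping required fields without
// defaults, incompatible type changes, and so on.
assert!(new_schema.can_migrate_from(&old_schema));
engine.update_table_schema(&mut tx, "users", new_schema)?;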

WAL Rotation and Compaction

  • WAL Rotation: WAL files are rotated after reaching a configurable threshold to prevent unbounded growth.
  • Database Compaction: Periodically, the database file can be compacted to reclaim space from deleted or obsolete pages.

Error Handling and Diagnostics

  • Error Propagation: All major operations return Result<T, String> for robust error handling.
  • Diagnostics: Warnings and errors are logged to stderr for troubleshooting.
  • Assertions and Invariants: Internal checks ensure data consistency and integrity.

Testing and Validation

  • Test Coverage: Unit, integration, and property-based tests cover all major features.
  • Test Utilities: Helpers for setting up test databases, cleaning up files, and simulating crash/recovery scenarios.
  • Performance Testing: Benchmark modules and stress tests for buffer pool, WAL, and blob store.

Extending the Engine

  • Adding New Field Types: Extend ValueType and update schema validation and serialization logic.
  • Custom Storage Backends: Implement new DatabaseType variants and adapt DBEngine initialization.
  • Custom REPL Commands: Extend the REPL parser and executor for new administrative or diagnostic commands.

Security and Data Integrity

  • Checksums: Data is checksummed before and after disk writes to detect corruption.
  • Atomicity: All disk writes are atomic at the page level; WAL ensures atomicity at the transaction level.
  • Crash Consistency: WAL and careful ordering of disk writes ensure no partial transactions are visible after a crash.
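
A minimal sketch of that checksum discipline, assuming nothing about the actual hash function (std's DefaultHasher stands in here):

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn checksum(data: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    data.hash(&mut hasher);
    hasher.finish()
}

// On write: store checksum(data) alongside the data, e.g. in the page header.
// On read: recompute and compare; a mismatch indicates corruption.
fn verify(data: &[u8], stored: u64) -> Result<(), String> {
    if checksum(data) == stored {
        Ok(())
    } else {
        Err("checksum mismatch: page corrupted".to_string())
    }
}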

Performance Considerations

  • Batching: Record serialization and disk writes are batched for efficiency.
  • Parallelism: The engine is designed to allow concurrent transactions, subject to isolation and locking.
  • Tuning: Buffer pool size, WAL rotation threshold, and blob threshold can be tuned for workload characteristics.

Additional Considerations

Multi-threading and Concurrency

  • Thread Safety: The engine uses Arc, Mutex, and RwLock to ensure safe concurrent access to shared data structures.
  • Concurrent Transactions: Multiple transactions can be processed in parallel, subject to locking and isolation constraints.
  • Lock Granularity: Record-level locks minimize contention, but schema/table-level locks may be used for DDL operations.

Serialization and Deserialization

  • Bincode: All on-disk data (records, schemas, TOC) is serialized using bincode for compactness and speed.
  • Version Compatibility: Care is taken to support both old and new page formats for forward/backward compatibility.

Data Integrity and Consistency

  • Checksums: Used to verify data integrity after disk writes and during recovery.
  • Atomic Operations: Disk writes and WAL appends are performed atomically to prevent partial updates.
  • Consistency Checks: On startup and after recovery, the engine verifies that all tables and records are consistent with their schemas.

Maintenance and Operations

  • Backup and Restore: The engine can be stopped and files copied for backup; restore is as simple as replacing the files.
  • Monitoring: Logging can be enabled for monitoring transaction throughput, errors, and performance metrics.
  • Upgrades: Schema versioning and migration logic allow for safe upgrades without data loss.

Limitations and Future Work

  • Distributed Transactions: Currently, transactions are local to a single engine instance.
  • Query Language: The REPL supports a simple command language; SQL compatibility is a possible future enhancement.
  • Advanced Indexing: Only basic secondary indexes are supported; more advanced indexing (e.g., B-trees, full-text) can be added.
  • Encryption: Data-at-rest encryption is not yet implemented.

Storage Layer Deep Dive

Page Format and Layout

Kinesis uses a structured page format for optimal storage and retrieval:

┌─────────────────────────────────────────────┬──────────────────────┐
│                Header (40B)                 │     Data (16344B)    │
├──────┬──────────┬────────┬──────────┬───────┼──────────────────────┤
│ Type │   Next   │ Length │ Checksum │ Rsrvd │    Records/Data      │
│  1B  │    8B    │   4B   │    8B    │  19B  │                      │
└──────┴──────────┴────────┴──────────┴───────┴──────────────────────┘

Page Constants:

  • PAGE_SIZE: 16,384 bytes (16KB)
  • PAGE_HEADER_SIZE: 40 bytes
  • MAX_DATA_SIZE: 16,344 bytes (PAGE_SIZE - PAGE_HEADER_SIZE)

Header Fields:

  • Type (1 byte): Page type (0=regular, 1=overflow)
  • Next (8 bytes): Next page ID in chain (0 if last page)
  • Length (4 bytes): Valid data length in this page
  • Checksum (8 bytes): Data integrity verification hash
  • Reserved (19 bytes): Future use, zero-filled
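
Expressed as Rust constants and a struct, the layout looks roughly like this (field names are illustrative assumptions; the sizes follow the diagram above):

const PAGE_SIZE: usize = 16_384;
const PAGE_HEADER_SIZE: usize = 40;
const MAX_DATA_SIZE: usize = PAGE_SIZE - PAGE_HEADER_SIZE; // 16,344 bytes

struct PageHeader {
    page_type: u8, // 0 = regular, 1 = overflow
    next: u64,     // next page ID in the chain, 0 if last
    length: u32,   // valid data length in this page
    checksum: u64, // integrity hash over the data area
    // plus 19 reserved, zero-filled bytes on disk (1 + 8 + 4 + 8 + 19 = 40)
}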

Overflow Page Chains

Large records spanning multiple pages use linked chains:

┌─────────┐    ┌──────────┐    ┌─────────┐
│ Page 1  │───▶│ Page 2   │───▶│ Page 3  │
│(Header) │    │(Overflow)│    │ (Last)  │
│ Data... │    │ Data...  │    │ Data... │
└─────────┘    └──────────┘    └─────────┘

Table of Contents (TOC) Structure

struct TOC {
    table_locations: HashMap<String, Vec<u64>>,  // Table → Page IDs
    table_schemas: HashMap<String, TableSchema>, // Table → Schema
    table_next_ids: HashMap<String, u64>,        // Table → Next Record ID
}

TOC Handling:

  • Small TOCs: Stored directly in page 0
  • Large TOCs: Stored in overflow chain with reference in page 0
  • Format: TOC_REF:<page_id> for overflow references

Memory Management and Performance

Buffer Pool Architecture

The buffer pool implements a sophisticated caching strategy:

// Default buffer pool size (in pages) by database type
DatabaseType::InMemory => 10_000, // no disk I/O overhead
DatabaseType::Hybrid   => 2_500,  // balanced performance
DatabaseType::OnDisk   => 100,    // minimal memory usage

LRU Eviction Policy:

  • Pages are ranked by access time
  • Dirty pages are flushed before eviction
  • Pin/unpin semantics prevent premature eviction
  • Configurable via DB_BUFFER_POOL_SIZE environment variable
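
In use, the pin/unpin discipline looks roughly like this sketch (pin and unpin are assumed method names):

// Pin the page so it cannot be evicted while in use; this loads it from
// disk if it is not already cached.
let page = buffer_pool.pin(page_id)?;

// ...read or modify the page; modifications mark it dirty...

// Unpin when done; dirty pages are flushed before they can be evicted.
buffer_pool.unpin(page_id, /* dirty = */ true);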

Performance Characteristics

Operation Complexity

Operation          | Time Complexity | Notes
-------------------|-----------------|----------------------------------
Insert             | O(1) + schema   | Staging only, validation overhead
Update             | O(1) + schema   | In-place updates when possible
Delete             | O(1)            | Lazy deletion, cleanup on commit
Point Query        | O(1)            | Hash-based record lookup
Table Scan         | O(n)            | Linear scan through all records
Schema Migration   | O(n)            | All records validated/migrated
Transaction Commit | O(k)            | k = number of operations in tx

Memory Usage Patterns

  • Record Storage: ~50-100 bytes overhead per record
  • Transaction Tracking: ~200 bytes per active transaction
  • Page Cache: ~16KB per cached page (one PAGE_SIZE buffer)
  • Blob References: ~50 bytes per large string reference

Disk I/O Patterns

  • Sequential Writes: WAL, batch serialization, compaction
  • Random Reads: Buffer pool minimizes seeks for hot data
  • Bulk Operations: Chunked to prevent memory pressure
  • Checkpointing: Periodic full database sync

Advanced Configuration and Tuning

Environment Variables

# Core Performance Settings
export DB_BUFFER_POOL_SIZE=5000           # Pages in memory cache (default: varies by DB type)

# Database Engine Configuration
export DB_NAME=main_db                    # Database filename (default: main_db)
export DB_STORAGE_ENGINE=hybrid           # Storage type: memory, disk, hybrid (default: hybrid)
export DB_ISOLATION_LEVEL=serializable    # Transaction isolation level (default: serializable)
export DB_AUTO_COMPACT=true               # Automatic database compaction (default: true)
export DB_RESTORE_POLICY=recover_pending  # WAL recovery policy (default: recover_pending)

# Feature Toggles
export DB_INDEXING=true                   # Enable secondary indexes (if supported)

# Logging and Debugging
export RUST_LOG=kinesis_db=debug          # Enable debug logging

Database Storage Engines:

  • memory: Purely in-memory, non-persistent
  • disk: Fully persistent, disk-based
  • hybrid: In-memory caching with disk persistence (recommended)

Isolation Levels:

  • read_uncommitted: No isolation, maximum performance
  • read_committed: Prevents dirty reads
  • repeatable_read: Consistent snapshot within transaction
  • serializable: Full isolation with conflict detection

Restore Policies:

  • discard: Ignore WAL on startup
  • recover_pending: Recover only transactions still pending in the WAL (default)
  • recover_all: Recover all transactions from WAL

Performance Tuning Guidelines

For High-Throughput Workloads

# Optimize for write performance
export DB_STORAGE_ENGINE=hybrid
export DB_BUFFER_POOL_SIZE=10000
export DB_ISOLATION_LEVEL=read_committed  # Lower isolation for better concurrency
export DB_AUTO_COMPACT=false              # Manual compaction only

For Memory-Constrained Environments

# Minimize memory usage
export DB_STORAGE_ENGINE=disk
export DB_BUFFER_POOL_SIZE=500
export DB_AUTO_COMPACT=true

For Read-Heavy Workloads

# Optimize for read performance
export DB_STORAGE_ENGINE=hybrid
export DB_BUFFER_POOL_SIZE=20000
export DB_INDEXING=true                   # If available
export DB_ISOLATION_LEVEL=repeatable_read # Good balance of consistency and performance

For Development and Testing

# Fast, non-persistent setup
export DB_STORAGE_ENGINE=memory
export DB_BUFFER_POOL_SIZE=1000
export DB_ISOLATION_LEVEL=read_uncommitted
export RUST_LOG=kinesis_db=trace

Debugging and Diagnostics

Logging and Monitoring

Enable comprehensive logging for troubleshooting:

# Full debug logging
export RUST_LOG=kinesis_db=trace,kinesis_db::storage=debug

# Component-specific logging
export RUST_LOG=kinesis_db::transaction=debug,kinesis_db::wal=info

Log Categories:

  • kinesis_db::transaction: Transaction lifecycle events
  • kinesis_db::storage: Page I/O and buffer pool activity
  • kinesis_db::wal: Write-ahead log operations
  • kinesis_db::blob: Large data storage operations

Common Issues and Solutions

Transaction Timeouts

Symptoms: Transaction timeout errors during commit
Causes: Long-running transactions, deadlocks, excessive lock contention
Solutions:

// Increase the timeout in the transaction config
let tx_config = TransactionConfig {
    timeout_secs: 300, // 5 minutes
    max_retries: 10,
    deadlock_detection_interval_ms: 100,
};

Memory Pressure

Symptoms: Slow performance, frequent page evictions, OOM errors
Causes: Insufficient buffer pool, large transactions, memory leaks
Solutions:

  • Increase DB_BUFFER_POOL_SIZE
  • Batch large operations
  • Use streaming for bulk imports
  • Monitor RSS with ps or system monitoring

Disk Space Issues

Symptoms: No space left on device, WAL growth
Causes: WAL accumulation, blob storage growth, failed compaction
Solutions:

# Manual WAL cleanup (when safe)
find /path/to/wal -name "*.log.*" -delete

# Force database compaction
echo "COMPACT DATABASE;" | your_repl_tool

# Monitor disk usage
du -sh /path/to/database/

Data Corruption

Symptoms: Checksum mismatch warnings, deserialization errors
Causes: Hardware issues, incomplete writes, software bugs
Solutions:

  • Restore from backup
  • Check hardware (disk, memory)
  • Enable more frequent syncing
  • Verify file permissions

Diagnostic Tools and Commands

Built-in REPL Diagnostics

-- Check database statistics
GET_TABLES;

-- Verify table schema
GET_TABLE users;

-- Examine specific records
GET_RECORD FROM users 1;

-- Search for patterns
SEARCH_RECORDS FROM users MATCH "corrupted";

File System Analysis

# Check file sizes and growth
ls -lh data/*.pages data/*.log data/*.blobs*

# Verify file integrity
file data/*.pages  # Should show "data"

Memory Analysis

# Monitor memory usage during operations
watch 'ps aux | grep your_process'

# Check for memory leaks
valgrind --tool=memcheck --leak-check=full your_binary

# Analyze heap usage
heaptrack your_binary

Development Workflow and Best Practices

Setting Up Development Environment

# Clone and setup
git clone <repository>
cd kinesis-api

# Install dependencies
cargo build

# Setup test environment
mkdir -p data/tests
export RUST_LOG=debug
export DB_BUFFER_POOL_SIZE=1000

Running Tests

# Full test suite
cargo test database::tests

# Specific test categories
cargo test database::tests::overflow_pages  # Storage tests
cargo test database::tests::concurrency     # Transaction tests
cargo test database::tests::schema          # Schema validation tests
cargo test database::tests::benchmark       # Performance tests

# Run with logging
RUST_LOG=debug cargo test database::tests::basic_operations

Adding New Features

1. Extending ValueType for New Data Types

// 1. Add to ValueType enum
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub enum ValueType {
    // ...existing variants...
    Decimal(rust_decimal::Decimal),
    Json(serde_json::Value),
}

// 2. Update FieldType enum
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub enum FieldType {
    // ...existing variants...
    Decimal,
    Json,
}

// 3. Add validation logic
impl FieldConstraint {
    pub fn validate_value(&self, value: &ValueType) -> Result<(), String> {
        match (&self.field_type, value) {
            // ...existing cases...
            (FieldType::Decimal, ValueType::Decimal(_)) => Ok(()),
            (FieldType::Json, ValueType::Json(_)) => Ok(()),
            _ => Err("Type mismatch".to_string()),
        }
    }
}

// 4. Add display formatting
impl fmt::Display for ValueType {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            // ...existing cases...
            ValueType::Decimal(d) => write!(f, "{}", d),
            ValueType::Json(j) => write!(f, "{}", j),
        }
    }
}

2. Adding New REPL Commands

// 1. Add command variant
#[derive(Debug, Clone)]
pub enum Command {
    // ...existing commands...
    ExplainQuery { table: String, query: String },
    ShowIndexes { table: Option<String> },
}

// 2. Add parser case
fn parse_commands(&self, input: &str) -> Result<Vec<Command>, String> {
    // ...existing parsing...
    match tokens[0].to_uppercase().as_str() {
        // ...existing cases...
        "EXPLAIN" => self.parse_explain(&tokens[1..])?,
        "SHOW" if tokens.len() > 1 && tokens[1].to_uppercase() == "INDEXES" => {
            self.parse_show_indexes(&tokens[2..])?
        },
        _ => return Err(format!("Unknown command: {}", tokens[0])),
    }
}

// 3. Add executor case
fn execute(&mut self, input: &str, format: Option<OutputFormat>) -> Result<String, String> {
    // ...existing execution...
    match command {
        // ...existing cases...
        Command::ExplainQuery { table, query } => {
            self.explain_query(&table, &query)
        },
        Command::ShowIndexes { table } => {
            self.show_indexes(table.as_deref())
        },
    }
}

3. Implementing New Storage Backends

// 1. Add database type
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub enum DatabaseType {
    // ...existing variants...
    Distributed { nodes: Vec<String> },
    Compressed { algorithm: CompressionType },
}

// 2. Update engine initialization (note: returns Result so that backend
// construction failures can propagate via `?`)
impl DBEngine {
    pub fn new(db_type: DatabaseType, /* other params */) -> Result<Self, String> {
        let (page_store, blob_store) = match &db_type {
            // ...existing cases...
            DatabaseType::Distributed { nodes } => {
                (Some(DistributedPageStore::new(nodes)?),
                 Some(DistributedBlobStore::new(nodes)?))
            },
            DatabaseType::Compressed { algorithm } => {
                (Some(CompressedPageStore::new(algorithm)?),
                 Some(CompressedBlobStore::new(algorithm)?))
            },
        };

        // ...rest of initialization...
    }
}

Code Style and Standards

// Use descriptive names and comprehensive documentation
/// Commits a transaction, applying all staged changes atomically.
///
/// This method validates the transaction according to its isolation level,
/// acquires necessary locks, applies changes to the in-memory database,
/// logs to WAL (if applicable), and persists to disk.
///
/// # Arguments
/// * `tx` - The transaction to commit
///
/// # Returns
/// * `Ok(())` if the transaction was committed successfully
/// * `Err(String)` describing the failure reason
///
/// # Examples
/// ```rust
/// let mut tx = engine.begin_transaction();
/// engine.insert_record(&mut tx, "users", record)?;
/// engine.commit(tx)?;
/// ```
pub fn commit(&mut self, tx: Transaction) -> Result<(), String> {
    CommitGuard::new(self, tx).commit()
}

// Use proper error handling with context
match self.validate_transaction(&tx) {
    Ok(()) => { /* continue */ }
    Err(e) => return Err(format!("Transaction validation failed: {}", e)),
}

// Prefer explicit types for complex generics
let buffer_pool: Arc<Mutex<BufferPool>> = Arc::new(Mutex::new(
    BufferPool::new(buffer_pool_size, db_type)
));

Testing Guidelines

Unit Tests

#[test]
fn test_transaction_isolation() -> Result<(), String> {
    let mut engine = setup_test_db("isolation_test", IsolationLevel::Serializable);

    // Test specific isolation behavior
    let mut tx1 = engine.begin_transaction();
    let mut tx2 = engine.begin_transaction();

    // Simulate concurrent operations
    engine.insert_record(&mut tx1, "test", record1)?;
    engine.insert_record(&mut tx2, "test", record2)?;

    // Verify isolation guarantees
    assert!(engine.commit(tx1).is_ok());
    assert!(engine.commit(tx2).is_err()); // Should conflict

    Ok(())
}

Integration Tests

#[test]
fn test_crash_recovery_scenario() {
    let db_path = "test_crash_recovery";

    // Phase 1: Create initial state
    {
        let mut engine = create_engine(db_path);
        perform_operations(&mut engine);
        // Simulate crash - don't clean shutdown
    }

    // Phase 2: Recovery
    {
        let mut engine = create_engine(db_path); // Should recover automatically
        verify_recovered_state(&engine);
    }
}

Performance Tests

use std::time::{Duration, Instant};

#[test]
fn test_bulk_insert_performance() -> Result<(), String> {
    let mut engine = setup_test_db("perf_test", IsolationLevel::ReadCommitted);

    let start = Instant::now();
    let mut tx = engine.begin_transaction();

    for i in 0..10_000 {
        let record = create_test_record(i, &format!("Record {}", i));
        engine.insert_record(&mut tx, "perf_test", record)?;
    }

    engine.commit(tx)?;
    let duration = start.elapsed();

    println!("Bulk insert of 10k records: {:?}", duration);
    assert!(duration < Duration::from_secs(10)); // Performance threshold

    Ok(())
}

FAQ and Troubleshooting

  • Q: Why is my data not persisted?
    • A: Ensure you are not using the InMemory backend. Only OnDisk and Hybrid persist data.
  • Q: How do I recover from a crash?
    • A: On startup, the engine automatically loads from disk and replays the WAL.
  • Q: How do I enable indexes?
    • A: Set the DB_INDEXING environment variable to true before starting the engine.
  • Q: How do I tune performance?
    • A: Adjust DB_BUFFER_POOL_SIZE, WAL rotation threshold, and use the appropriate backend for your workload.

Glossary

  • MVCC: Multi-Version Concurrency Control, enables snapshot isolation.
  • WAL: Write-Ahead Log, ensures durability and crash recovery.
  • TOC: Table of Contents, metadata page mapping tables to page chains.
  • Blob: Large binary or string data stored outside the main page store.

For further details, refer to the source code and inline documentation in each module.


References

  • See the src/database/ directory for implementation details.
  • Consult module-level Rust docs for API usage and extension points.
  • For design discussions and roadmap, refer to the project wiki or issue tracker.

This document is intended to be comprehensive. If you find any missing details or have suggestions for improvement, please update this file or open a new ticket.


Build APIs with Kinesis API

Welcome to the Kinesis API tutorials section. These step-by-step guides will help you learn how to build powerful, robust APIs using the Kinesis API platform. Whether you're new to API development or an experienced developer looking to harness the full potential of Kinesis API, these tutorials will provide practical, hands-on experience.

About These Tutorials

Each tutorial in this section:

  • Walks through a complete, real-world example
  • Includes step-by-step instructions with screenshots
  • Explains core concepts as they're introduced
  • Provides working code that you can adapt for your own projects
  • Builds skills progressively from basic to advanced

Prerequisites

Before starting these tutorials, you should:

  • Have Kinesis API installed and running (see Installation)
  • Be familiar with basic API concepts
  • Understand HTTP methods (GET, POST, PUT, DELETE)
  • Have completed the initial setup of Kinesis API

Available Tutorials

Building a Simple Counter App

Difficulty: Beginner

This tutorial walks you through creating a basic counter API that demonstrates fundamental Kinesis API concepts. You'll learn how to:

  • Create a new project and collection
  • Define structures for your data
  • Build API routes for retrieving and updating the counter
  • Test your API using the Playground
  • Understand the basics of the X Engine

This is the perfect starting point if you're new to Kinesis API.

Implementing JWT Authentication

Difficulty: Intermediate

This tutorial guides you through implementing secure user authentication using JSON Web Tokens (JWT). You'll learn how to:

  • Create a user authentication system
  • Generate and validate JWTs
  • Secure your API routes

This tutorial builds on the fundamentals and introduces more advanced security concepts.

Using Loops to Filter Data

Difficulty: Intermediate

This tutorial teaches you how to use loops to process and filter data. You'll learn how to:

  • Fetch a list of data objects
  • Loop through the list
  • Use conditional logic inside a loop
  • Filter items based on URL parameters
  • Return a filtered list of data

This is a great tutorial for understanding how to build more dynamic and complex API logic.

Approaching These Tutorials

We recommend following these tutorials in order, as each one builds on concepts introduced in previous tutorials. However, if you're already familiar with Kinesis API basics, you can jump directly to more advanced tutorials.

As you work through the tutorials:

  1. Take your time: Understand each step before moving to the next
  2. Experiment: Try modifying examples to see how things work
  3. Refer to documentation: Use the API Reference and other documentation when needed
  4. Troubleshoot: If something doesn't work, check for typos or review earlier steps

What You'll Learn

By completing these tutorials, you'll gain practical experience in:

  • Designing and implementing APIs with Kinesis API
  • Modeling data effectively using structures
  • Creating secure, efficient API routes using the X Engine
  • Testing and debugging your APIs
  • Implementing common patterns like authentication, data validation, and error handling

Getting Help

If you encounter difficulties while following these tutorials:

Ready to Begin?

Start with Building a Simple Counter App to begin your journey with Kinesis API!

Building a Simple Counter App

This tutorial will guide you through building a simple counter API with Kinesis API. You'll create an API endpoint that stores and increments a count value with each request. This is a perfect first project to help you understand the fundamentals of Kinesis API.

Prefer video tutorials? You can follow along with our YouTube walkthrough of this same project.

Prerequisites

Before you begin, you need:

  1. Access to a Kinesis API instance
  2. A user account with ADMIN or ROOT privileges
  3. Basic understanding of REST APIs and HTTP methods

1. Creating a Project

First, we'll create a project to organize our counter API:

  1. Log in to your Kinesis API instance
  2. Navigate to the Projects page from the main menu
  3. Click "Create a new project" to open the project creation modal
  4. Fill in the following details:
    • Name: "Counter Project"
    • ID: "counter"
    • Description: "A dummy starter project for a counter." (or anything else you want)
    • API Path: "/counter" (this will be the base URL path for all routes in this project) Create Project
  5. Click "Create" to save your project

2. Creating a Collection

Next, we'll create a collection to store our counter data:

  1. From your newly created project page, click "Create New" on the "Collections" section
  2. Fill in the following details:
    • Name: "count"
    • ID: "count"
    • Description: "To store the actual count object." (or anything else you want) Create Collection
  3. Click "Create" to save your collection

Project Page

3. Creating Structures

Now we'll create some structures (fields) to store our counter value:

  1. From the "Count" collection page, locate the "Structures" section
  2. Click "Create New" to add a structure
  3. Fill in the following details:
    • Name: "id"
    • ID: "id"
    • Description: "" (leave blank or insert anything else)
    • Type: "INTEGER"
    • Min: "0"
    • Max: "1000"
    • Default Value: "0"
    • Required: Check this box
    • Unique: Check this box

    Create ID Structure
  4. Click "Create" to save the structure
  5. Click "Create New" to add another structure
  6. Fill in the following details:
    • Name: "value"
    • ID: "value"
    • Description: "" (leave blank or insert anything else)
    • Type: "INTEGER"
    • Min: "0"
    • Max: "999999999"
    • Default Value: "0" (to start the counter at zero)
    • Required: Check this box

    Create Value Structure
  7. Click "Create" to save the structure

Collection Page

4. Creating Data

Now we'll create an initial data object with our counter:

  1. Navigate to the "Data" section in the top navigation bar
  2. Select your project and then the "Count" collection
  3. Click "Create New" to create a data object
  4. Add a nickname (optional)
  5. For the "id" field, enter: "0"
  6. For the "value" field, enter: "0" Create Data
  7. Click "Create" to save your data object

Data Page

5. Creating a Route

Now we'll create a route that increments the counter:

  1. Navigate to Routes in the top navigation bar
  2. Select your project
  3. Click "Create New" to create a route
  4. Fill in the route details:
    • Route ID: "GET_COUNTER"
    • Route Path: "/get"
    • HTTP Method: "GET"
  5. The "JWT Authentication", "URL Parameters" and "Body" sections don't need to be modified Create Route

Building the Route Logic in the Flow Editor

The Flow Editor is where we define what happens when our route is called. We'll build a flow that:

  1. Fetches the current counter value
  2. Increments it by 1
  3. Updates the stored value
  4. Returns the new value

Follow these steps:

  1. Add a FETCH block:

    • Drag a FETCH block onto the canvas and connect the START node to it
    • Fill in the following details:
      • Local Name: "_allCounts"
      • Reference Collection: "count"

    Fetch Block

  2. Add a PROPERTY block:

    • Drag a PROPERTY block onto the canvas and connect the FETCH block to it
    • Fill in the following details:
      • Local Name: "_currentCount"
      • Property Apply: "GET_INDEX"
      • Additional: "0"
    • As for the data section, fill in the following details:
      • Reference: Check this box
      • Type: "Array"
      • Data: "_allCounts"

    Property Block

  3. Add another PROPERTY block:

    • Drag another PROPERTY block onto the canvas and connect the previous PROPERTY block to it
    • Fill in the following details:
      • Local Name: "currentCountValue"
      • Property Apply: "GET_PROPERTY"
      • Additional: "value"
    • As for the data section, fill in the following details:
      • Reference: Check this box
      • Type: "Other"
      • Data: "_currentCount"

    Property Block

  4. Add an ASSIGNMENT block:

    • Drag an ASSIGNMENT block onto the canvas and connect the PROPERTY block to it
    • Fill in the following details:
      • Local Name: "updatedCount"
    • Add an "operation" submodule with 2 operands, and fill in the following details:
      • First Operand:
        • Reference: Check this box
        • Type: "Integer"
        • Data: "currentCountValue"
      • Second Operand:
        • Reference: Leave unchecked
        • Type: "Integer"
        • Data: "1"
    • Operation Type: "Addition"

    Assignment Block

  5. Add an UPDATE block:

    • Drag an UPDATE block onto the canvas and connect the ASSIGNMENT block to it
    • Fill in the following details:
      • Reference Collection: "count"
      • Reference Property: "value"
    • As for the set section, fill in the following details:
      • Reference: Check this box
      • Type: "Integer"
      • Data: "updatedCount"
    • Add an "update target" submodule with 1 operand, and fill in the following details:
      • Field: "id"
      • Operand:
        • Reference: Leave unchecked
        • Type: "Integer"
        • Data: "0"
        • Condition Type: "Equal To"
    • Set: Check this box
    • Save: Check this box

    Update Block

  6. Add a RETURN block:

    • Drag a RETURN block onto the canvas and connect the UPDATE block to it
    • Add an "object pair" submodule
    • Fill in the following details:
      • id: "count"
      • data:
        • Reference: Check this box
        • Type: "Integer"
        • Data: "updatedCount"

    Return Block

  7. Ensure that your Flow Editor looks like the following:

    Flow Editor

  8. Click "Create" to save your route

Route Page

6. Testing via Playground

Now let's test our counter API:

  1. Navigate to Playground in the top navigation bar
  2. Select your project

    Playground Page

  3. Click on the "GET_COUNTER" route

    Playground Project Page
  4. Click "Send" to make a request to your API
  5. You should see a response like:
    {
      "count": 1
    }
    
  6. Click "Send" again and you should see the counter increment:
    {
      "count": 2
    }
    

Playground Request Page

Congratulations!

You've successfully built a simple counter API using Kinesis API! Here's what you've accomplished:

  • Created a project to organize your API
  • Set up a collection to store data
  • Defined some structures (fields) for your counter
  • Created a data object with an initial value
  • Built a route that increments the counter
  • Tested your API and verified it works

Next Steps

Now that you understand the basics, here are some ways to extend this project:

  1. Add a reset route: Create a new route that resets the counter to zero
  2. Add custom increment: Modify the route to accept a parameter that specifies how much to increment by
  3. Add multiple counters: Create multiple data objects and update your route to increment a specific counter
  4. Add authentication: Require a token to increment the counter

Continuous Improvement

Note: Kinesis API is continuously evolving based on user feedback. As users test and provide suggestions, the platform will become simpler, more intuitive, and easier to use. This tutorial will be updated regularly to reflect improvements in the user experience and new features.

We value your feedback! If you have suggestions for improving this tutorial or the Kinesis API platform, please reach out through our contact page or consider raising a new ticket.

Implementing JWT Authentication

This tutorial will guide you through implementing a complete JWT (JSON Web Token) authentication system using Kinesis API. You'll create login functionality that generates tokens and a verification endpoint that validates those tokens, establishing a secure authentication flow for your APIs.

Prefer video tutorials? You can follow along with our YouTube walkthrough of this same project.

Prerequisites

Before you begin, you need:

  1. Access to a Kinesis API instance
  2. A user account with ADMIN or ROOT privileges
  3. Basic understanding of:
    • REST APIs and HTTP methods
    • Authentication concepts
    • JSON Web Tokens (JWT)

What is JWT Authentication?

JWT (JSON Web Token) is an open standard for securely transmitting information between parties. In the context of authentication:

  1. A user logs in with their credentials
  2. The server validates the credentials and generates a signed JWT
  3. The client stores this JWT and sends it with subsequent requests
  4. The server verifies the JWT's signature to authenticate the user

This stateless approach eliminates the need for server-side session storage, making it ideal for APIs.

1. Creating the Authentication Project

First, let's create a project to organize our authentication API:

  1. Log in to your Kinesis API instance
  2. Navigate to the Projects page from the main menu
  3. Click "Create New" to open the project creation modal
  4. Fill in the following details:
    • Name: "Authentication"
    • ID: "auth"
    • Description: "Simple project to test JWT authentication." (or anything else you want)
    • API Path: "/auth" (this will be the base URL path for all routes in this project) Create Project
  5. Click "Create" to save your project

2. Creating the Accounts Collection

Next, let's create a collection to store user accounts:

  1. From your newly created project page, click "Create New" on the "Collections" section
  2. Fill in the following details:
    • Name: "Accounts"
    • ID: "accounts"
    • Description: "To store user accounts." (or anything else you want) Create Collection
  3. Click "Create" to save your collection

Project Page

3. Creating Structures

Now we'll create structures (fields) to store user information:

UID Structure

  1. From the "Accounts" collection page, locate the "Structures" section
  2. Click "Create New" to add a structure
  3. Fill in the following details:
    • Name: "uid"
    • ID: "uid"
    • Description: "" (leave blank or insert anything else)
    • Type: "INTEGER"
    • Min: "0"
    • Max: "1000"
    • Default Value: "0"
    • Required: Check this box
    • Unique: Check this box

    Create UID Structure
  4. Click "Create" to save the structure

Username Structure

  1. Click "Create New" again to add another structure
  2. Fill in the following details:
    • Name: "username"
    • ID: "username"
    • Description: "" (leave blank or insert anything else)
    • Type: "TEXT"
    • Min: "4"
    • Max: "100"
    • Required: Check this box
    • Unique: Check this box

    Create Username Structure
  3. Click "Create" to save the structure

Password Structure

  1. Click "Create New" again to add the final structure
  2. Fill in the following details:
    • Name: "password"
    • ID: "password"
    • Description: "" (leave blank or insert anything else)
    • Type: "PASSWORD"
    • Min: "8"
    • Max: "100"
    • Required: Check this box

    Create Password Structure
  3. Click "Create" to save the structure

Collection Page

4. Creating a Test User Account

Let's create a test user to demonstrate our authentication system:

  1. Navigate to the "Data" section in the top navigation bar
  2. Select your "Authentication" project and then the "Accounts" collection
  3. Click "Create New" to create a data object
  4. Add a nickname (optional)
  5. Fill in the fields:
    • uid: "0" (this will be our unique identifier)
    • username: "john_doe"
    • password: "password123" (in a real system, you'd use a strong password) Create Data
  6. Click "Create" to save your data object

Data Page

5. Creating the Login Route

Now we'll create a route that validates credentials and generates a JWT token:

  1. Navigate to Routes in the top navigation bar
  2. Select your "Authentication" project
  3. Click "Create New" to create a route
  4. Fill in the route details:
    • Route ID: "LOGIN"
    • Route Path: "/login"
    • HTTP Method: "POST"
  5. Add 2 parameters to the "Body" section:
    • username: "STRING"
    • password: "STRING"
  6. The "JWT Authentication" and "URL Parameters" sections don't need to be modified

Create Route

Building the Login Route Logic

In the Flow Editor, we'll create a flow that:

  1. Accepts username and password from the request body
  2. Validates these against our stored accounts
  3. Generates a JWT token if credentials are valid
  4. Returns an error if credentials are invalid

Follow these steps:

  1. Add a FETCH block:

    • Drag a FETCH block onto the canvas and connect the START node to it
    • Fill in the following details:
      • Local Name: "_accounts"
      • Reference Collection: "accounts"

    Fetch Block

  2. Add a FILTER block:

    • Drag a FILTER block onto the canvas and connect the FETCH block to it
    • Fill in the following details:
      • Local Name: "_matchingAccounts"
      • Reference Variable: "_accounts"
      • Reference Property: "username"
    • Add a "filter" submodule and fill in the following details:
      • Operation Type: "Equal To"
      • Operand:
        • Reference: Check this box
        • Type: "String"
        • Data: "username"

    Filter Block

  3. Add a PROPERTY block:

    • Drag a PROPERTY block onto the canvas and connect the FILTER block to it
    • Fill in the following details:
      • Local Name: "_matchingAccountsLength"
      • Property Apply: "LENGTH"
    • As for the data section, fill in the following details:
      • Reference: Check this box
      • Type: "Array"
      • Data: "_matchingAccounts"

    Property Block

  4. Add a CONDITION block:

    • Drag a CONDITION block onto the canvas and connect the PROPERTY block to it
    • Fill in the following details:
      • Action: "Fail"
      • Fail Object Status: "404"
      • Fail Object Message: "User not found"
    • Add a "condition" submodule with 2 operands, and fill in the following details:
      • First Operand:
        • Reference: Check this box
        • Type: "Integer"
        • Data: "_matchingAccountsLength"
      • Second Operand:
        • Reference: Leave unchecked
        • Type: "Integer"
        • Data: "1"
      • Condition Type: "Less than"

    Condition Block

  5. Add a PROPERTY block:

    • Drag a PROPERTY block onto the canvas and connect the CONDITION block to it
    • Fill in the following details:
      • Local Name: "_foundAccount"
      • Property Apply: "GET_FIRST"
    • As for the data section, fill in the following details:
      • Reference: Check this box
      • Type: "Array"
      • Data: "_matchingAccounts"

    Property Block

  6. Add a PROPERTY block:

    • Drag a PROPERTY block onto the canvas and connect the previous PROPERTY block to it
    • Fill in the following details:
      • Local Name: "_savedPassword"
      • Property Apply: "GET_PROPERTY"
      • Additional: "password"
    • As for the data section, fill in the following details:
      • Reference: Check this box
      • Type: "Other"
      • Data: "_foundAccount"

    Property Block

  7. Add a CONDITION block:

    • Drag a CONDITION block onto the canvas and connect the PROPERTY block to it
    • Fill in the following details:
      • Action: "Fail"
      • Fail Object Status: "401"
      • Fail Object Message: "Invalid Password"
    • Add a "condition" submodule with 2 operands, and fill in the following details:
      • First Operand:
        • Reference: Check this box
        • Type: "String"
        • Data: "password"
      • Second Operand:
        • Reference: Check this box
        • Type: "String"
        • Data: "_savedPassword"
      • Condition Type: "Not equal to"

    Condition Block

  8. Add a PROPERTY block:

    • Drag a PROPERTY block onto the canvas and connect the CONDITION block to it
    • Fill in the following details:
      • Local Name: "_uid"
      • Property Apply: "GET_PROPERTY"
      • Additional: "uid"
    • As for the data section, fill in the following details:
      • Reference: Check this box
      • Type: "Other"
      • Data: "_foundAccount"

    Property Block

  9. Add a FUNCTION block:

    • Drag a FUNCTION block onto the canvas and connect the PROPERTY block to it
    • Fill in the following details:
      • Local Name: "_jwt"
      • Function: "GENERATE_JWT_TOKEN"
    • Add a "parameter" submodule and fill in the following details:
      • Reference: Check this box
      • Type: "Integer"
      • Data: "_uid"

    Function Block

  10. Add a RETURN block:

    • Drag a RETURN block onto the canvas and connect the FUNCTION block to it
    • Add an "object pair" submodule
    • Fill in the following details:
      • id: "uid"
      • data:
        • Reference: Check this box
        • Type: "Integer"
        • Data: "_uid"
    • Add another "object pair" submodule
    • Fill in the following details:
      • id: "jwt"
      • data:
        • Reference: Check this box
        • Type: "String"
        • Data: "_jwt"

    Return Block

  11. Ensure that your Flow Editor looks like the following:

    Flow Editor

  12. Click "Create" to save your route

Route Page

6. Creating the Verify Route

Now let's create a route that verifies JWT tokens:

  1. Navigate back to Routes in the top navigation bar
  2. Select your "Authentication" project
  3. Click "Create New" to create a route
  4. Fill in the route details:
    • Route ID: "VERIFY"
    • Route Path: "/verify"
    • HTTP Method: "GET"
  5. Fill in the details for the "JWT Authentication" section:
    • Active: Check this box
    • Field: "uid"
    • Reference Collection: "accounts"
  6. Set the "delimiter" to be "&" for the "URL Parameters" section
  7. Add 1 parameter to the "URL Parameters" section:
    • uid: "INTEGER"
  8. The "Body" section doesn't need to be modified

Create Route

Building the Verification Route Logic

In the Flow Editor, we'll create a flow that:

  1. Extracts the JWT token from the Authorization header
  2. Verifies the token's signature and expiration
  3. Returns the decoded payload if valid
  4. Returns an error if invalid

Follow these steps:

  1. Add a FETCH block:

    • Drag a FETCH block onto the canvas and connect the START node to it
    • Fill in the following details:
      • Local Name: "_accounts"
      • Reference Collection: "accounts"

    Fetch Block

  2. Add a FILTER block:

    • Drag a FILTER block onto the canvas and connect the FETCH block to it
    • Fill in the following details:
      • Local Name: "_matchingAccounts"
      • Reference Variable: "_accounts"
      • Reference Property: "uid"
    • Add a "filter" submodule and fill in the following details:
      • Operation Type: "Equal To"
      • Operand:
        • Reference: Check this box
        • Type: "Integer"
        • Data: "uid"

    Filter Block

  3. Add a PROPERTY block:

    • Drag a PROPERTY block onto the canvas and connect the FILTER block to it
    • Fill in the following details:
      • Local Name: "_foundAccount"
      • Property Apply: "GET_FIRST"
    • As for the data section, fill in the following details:
      • Reference: Check this box
      • Type: "Array"
      • Data: "_matchingAccounts"

    Property Block

  4. Add a PROPERTY block:

    • Drag a PROPERTY block onto the canvas and connect the previous PROPERTY block to it
    • Fill in the following details:
      • Local Name: "_username"
      • Property Apply: "GET_PROPERTY"
      • Additional: "username"
    • As for the data section, fill in the following details:
      • Reference: Check this box
      • Type: "Other"
      • Data: "_foundAccount"

    Property Block

  5. Add a RETURN block:

    • Drag a RETURN block onto the canvas and connect the PROPERTY block to it
    • Add an "object pair" submodule
    • Fill in the following details:
      • id: "message"
      • data:
        • Reference: Leave unchecked
        • Type: "String"
        • Data: "Authentication Succeeded!"
    • Add another "object pair" submodule
    • Fill in the following details:
      • id: "username"
      • data:
        • Reference: Check this box
        • Type: "String"
        • Data: "_username"

    Return Block

  6. Ensure that your Flow Editor looks like the following:

    Flow Editor

  7. Click "Create" to save your route

Route Page

7. Testing the Authentication System

Now let's test our JWT authentication system using the Playground:

Testing the Login Route

  1. Navigate to Playground in the top navigation bar
  2. Select your "Authentication" project Playground Page
  3. Click on the "LOGIN" route Playground Project Page
  4. Set the request body to:
    {
      "username": "john_doe",
      "password": "password123"
    }
    
  5. Click "Send" to make a request to your API
  6. You should see a response like:
    {
      "uid": 0,
      "jwt": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9..."
    }
    
  7. Copy the generated JWT token and note the value of the uid

Playground Request Page

Testing Edge Cases

Let's test some edge cases to ensure our authentication system is robust:

  1. Invalid Username

    • Try the login route with an invalid username:
      {
        "username": "wronguser",
        "password": "password123"
      }
      
    • You should receive a 404 error response:
      {
        "message": "User not found",
        "status": 404
      }
      
  2. Invalid Password

    • Try the login route with an incorrect password:
      {
        "username": "john_doe",
        "password": "wrongpassword"
      }
      
    • You should receive a 401 error response:
      {
        "message": "Invalid Password",
        "status": 401
      }
      

Testing the Verify Route

  1. Navigate to Playground in the top navigation bar
  2. Select your "Authentication" project Playground Page
  3. Click on the "VERIFY" route Playground Project Page
  4. Paste the JWT token that you copied from the previous section in the "Authorization" field
  5. Enter the value "0" for the "uid" field in the "URL Parameters" section
  6. Click "Send" to make a request to your API
  7. You should see a response like:
    {
      "message": "Authentication Succeeded!",
      "username": "john_doe"
    }
    

Playground Request Page

Testing Edge Cases

Let's test some edge cases to ensure our authentication system is robust:

  1. No Authorization Header / Invalid Token

    • Try the verify route without an Authorization header or by setting it to a random value
    • You should receive a 500 error response:
      {
        "message": "Error: Failed decoding JWT (InvalidToken)",
        "status": 500
      }
      
  2. Wrong UID

    • Try the verify route with a different value for the "uid" field
    • You should receive a 403 error response:
      {
        "message": "Error: Incorrect uid",
        "status": 403
      }
      

Congratulations!

You've successfully implemented a complete JWT authentication system using Kinesis API! Here's what you've accomplished:

  • Created a project and collection to store user accounts
  • Set up the necessary data structures for authentication
  • Built a login route that validates credentials and generates JWT tokens
  • Created a verification route that validates tokens
  • Tested both successful authentication flows and edge cases

Next Steps

To build on this authentication system, you could:

  1. Add User Registration: Create a route that allows new users to register
  2. Add Role-Based Access Control: Extend the JWT payload to include user roles for authorization
  3. Create Protected Routes: Build additional routes that require valid authentication
  4. Add Refresh Tokens: Implement a token refresh mechanism for longer sessions

Continuous Improvement

Note: Kinesis API is continuously evolving based on user feedback. As users test and provide suggestions, the platform will become simpler, more intuitive, and easier to use. This tutorial will be updated regularly to reflect improvements in the user experience and new features. A dedicated "AUTH" block is planned for the future to dramatically simplify the process of adding authentication to your APIs.

We value your feedback! If you have suggestions for improving this tutorial or the Kinesis API platform, please reach out through our contact page or consider raising a new ticket.

Using Loops to Filter Data

This tutorial will guide you through using loops in Kinesis API to filter a list of data. You'll create an API endpoint that fetches a list of creatures and returns only the ones that match a specific criterion passed in the URL. This is a great way to learn how to build more dynamic and powerful API logic.

Prefer video tutorials? You can follow along with our YouTube walkthrough of this same project.

Prerequisites

Before you begin, you need:

  1. Access to a Kinesis API instance
  2. A user account with ADMIN or ROOT privileges
  3. Basic understanding of REST APIs and the Flow Editor

1. Creating a Project

First, let's create a project for our creatures API:

  1. Log in to your Kinesis API instance
  2. Navigate to the Projects page from the main menu
  3. Click "Create a new project" to open the project creation modal
  4. Fill in the following details:
    • Name: "Creatures"
    • ID: "creatures"
    • Description: "To have fun with some beings." (or anything else you want)
    • API Path: "/creatures" (this will be the base URL path for all routes in this project) Create Project
  5. Click "Create" to save your project

2. Creating a Collection

Next, we'll create a collection to store our creature data:

  1. From your newly created project page, click "Create New" on the "Collections" section
  2. Fill in the following details:
    • Name: "Creatures"
    • ID: "creatures"
    • Description: "To store details on every creature." (or anything else you want) Create Collection
  3. Click "Create" to save your collection

Project Page

3. Creating Structures

Now we'll create some structures (fields) to store our creature data:

  1. From the "Creatures" collection page, locate the "Structures" section
  2. Click "Create New" to add a structure
  3. Fill in the following details:
    • Name: "Species"
    • ID: "species"
    • Description: "" (leave blank or insert anything else)
    • Type: "TEXT"
    • Required: Check this box
    • Unique: Check this box

    Create Species Structure
  4. Click "Create" to save the structure
  5. Click "Create New" to add another structure
  6. Fill in the following details:
    • Name: "Is a pet"
    • ID: "is_pet"
    • Description: "Whether the species is considered as a pet." (leave blank or insert anything else)
    • Type: "BOOLEAN"
    • Required: Check this box Create Is Pet Structure
  7. Click "Create" to save the structure

Collection Page

4. Creating Data

Now we'll create some initial data objects for our creatures:

  1. Navigate to the "Data" section in the top navigation bar
  2. Select your project and then the "Creatures" collection
  3. Click "Create New" to create a data object
  4. Add a nickname (optional)
  5. For the "species" field, enter: "Dog"
  6. For the "is a pet" field, check the checkbox Create Data
  7. Click "Create" to save your data object
  8. Repeat the steps above as many times as you want for different species, indicating whether each is typically considered a pet
  9. These are the data objects created as examples for this tutorial:
    • Creature 1:
      • species: "Dog"
      • is_pet: true
    • Creature 2:
      • species: "Cat"
      • is_pet: true
    • Creature 3:
      • species: "Scorpion"
      • is_pet: false
    • Creature 4:
      • species: "Kangaroo"
      • is_pet: false
    • Creature 5:
      • species: "Hamster"
      • is_pet: true

Data Page
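Conceptually, the collection now holds a set of objects shaped like the TypeScript sketch below. The field names match the structure IDs defined earlier, but the exact storage and wire format used by Kinesis API may differ:

  // Assumed in-memory shape, for illustration only.
  const creatures = [
    { species: "Dog", is_pet: true },
    { species: "Cat", is_pet: true },
    { species: "Scorpion", is_pet: false },
    { species: "Kangaroo", is_pet: false },
    { species: "Hamster", is_pet: true },
  ];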

5. Creating a Route

Now, we'll create a route that filters these creatures based on whether they are a pet:

  1. Navigate to Routes in the top navigation bar
  2. Select your project
  3. Click "Create New" to create a route
  4. Fill in the route details:
    • Route ID: "fetch_creatures"
    • Route Path: "/fetch"
    • HTTP Method: "GET"
  5. In the URL Parameters section, set the delimiter to "&", then click "Add" and define a parameter:
    • ID: "pet"
    • Type: "BOOLEAN"
  6. The "JWT Authentication" and "Body" sections don't need to be modified Create Route

Building the Route Logic in the Flow Editor

The Flow Editor is where we define what happens when our route is called. We'll build a flow that fetches all creatures, loops through them, and returns only the ones that match the pet URL parameter.
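Before building it block by block, it may help to see the whole flow as ordinary code. The TypeScript sketch below mirrors the Local Names used in the steps that follow; it is an approximation for orientation only, and details such as whether the loop's end bound is exclusive are simplified:

  // Rough equivalent of the flow built below; names mirror the blocks' Local Names.
  type Creature = { species: string; is_pet: boolean };

  function fetchCreatures(allCreatures: Creature[], pet: boolean): { creatures: string[] } {
    const _allCreaturesLength = allCreatures.length;            // PROPERTY: LENGTH
    let _foundCreatures: string[] = [];                         // ASSIGNMENT: empty array
    for (let index = 0; index < _allCreaturesLength; index++) { // LOOP ... END_LOOP
      const _creature = allCreatures[index];                    // PROPERTY: GET_INDEX
      const _species = _creature.species;                       // PROPERTY: GET_PROPERTY "species"
      const _is_pet = _creature.is_pet;                         // PROPERTY: GET_PROPERTY "is_pet"
      if (_is_pet === pet) {                                    // ASSIGNMENT with a condition
        _foundCreatures = _foundCreatures.concat(_species);     // Operation Type: Addition
      }
    }
    return { creatures: _foundCreatures };                      // RETURN block
  }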

Follow these steps:

  1. Add a FETCH block:

    • Drag a FETCH block onto the canvas and connect the START node to it
    • Fill in the following details:
      • Local Name: "_allCreatures"
      • Reference Collection: "creatures"

    Fetch Block

  2. Add a PROPERTY block:

    • Drag a PROPERTY block onto the canvas and connect the FETCH block to it
    • Fill in the following details:
      • Local Name: "_allCreaturesLength"
      • Property Apply: "LENGTH"
    • As for the data section, fill in the following details:
      • Reference: Check this box
      • Type: "Array"
      • Data: "_allCreatures"

    Property Block

  3. Add an ASSIGNMENT block:

    • Drag an ASSIGNMENT block onto the canvas and connect the PROPERTY block to it
    • Fill in the following details:
      • Local Name: "_foundCreatures"
    • Add an "operation" submodule with 1 operand, and fill in the following details:
      • Reference: Leave unchecked
      • Type: "Array"
      • Data: ""
    • Operation Type: "None"

    Assignment Block

  4. Add a LOOP block:

    • Drag a LOOP block onto the canvas and connect the ASSIGNMENT block to it
    • Fill in the following details:
      • Local Name: "index"
    • Fill in the following details for the start section:
      • Reference: Leave unchecked
      • Type: "Integer"
      • Data: "0"
    • Fill in the following details for the end section:
      • Reference: Check this box
      • Type: "Integer"
      • Data: "_allCreaturesLength"
    • Leave everything else as it is

    Loop Block

  5. Add a PROPERTY block:

    • Drag a PROPERTY block onto the canvas and connect the previous LOOP block to it
    • Fill in the following details:
      • Local Name: "_creature"
      • Property Apply: "GET_INDEX"
      • Additional: "index"
    • As for the data section, fill in the following details:
      • Reference: Check this box
      • Type: "Array"
      • Data: "_allCreatures"

    Property Block

  6. Add another PROPERTY block:

    • Drag another PROPERTY block onto the canvas and connect the previous PROPERTY block to it
    • Fill in the following details:
      • Local Name: "_species"
      • Property Apply: "GET_PROPERTY"
      • Additional: "species"
    • As for the data section, fill in the following details:
      • Reference: Check this box
      • Type: "Other"
      • Data: "_creature"

    Property Block

  7. Add another PROPERTY block:

    • Drag another PROPERTY block onto the canvas and connect the previous PROPERTY block to it
    • Fill in the following details:
      • Local Name: "_is_pet"
      • Property Apply: "GET_PROPERTY"
      • Additional: "is_pet"
    • As for the data section, fill in the following details:
      • Reference: Check this box
      • Type: "Other"
      • Data: "_creature"

    Property Block

  8. Add an ASSIGNMENT block:

    • Drag an ASSIGNMENT block onto the canvas and connect the PROPERTY block to it
    • Fill in the following details:
      • Local Name: "_foundCreatures"
    • Add a "condition" submodule with 2 operands, and fill in the following details:
      • First Operand:
        • Reference: Check this box
        • Type: "Boolean"
        • Data: "_is_pet"
      • Second Operand:
        • Reference: Check this box
        • Type: "Boolean"
        • Data: "pet"
    • Condition Type: "Equal To"
    • Add an "operation" submodule with 2 operands, and fill in the following details:
      • First Operand:
        • Reference: Check this box
        • Type: "Array"
        • Data: "_foundCreatures"
      • Second Operand:
        • Reference: Check this box
        • Type: "Array"
        • Data: "_species"
    • Operation Type: "Addition"

    Assignment Block

  9. Add an END_LOOP block:

    • Drag an END_LOOP block onto the canvas and connect the ASSIGNMENT block to it
    • Fill in the following details:
      • Local Name: "index"

    End Loop Block

  10. Add a RETURN block:

    • Drag a RETURN block onto the canvas and connect the END_LOOP block to it
    • Add an "object pair" submodule
    • Fill in the following details:
      • id: "creatures"
      • data:
        • Reference: Check this box
        • Type: "Array"
        • Data: "_foundCreatures"

    Return Block

  11. Ensure that your Flow Editor looks like the following:

    Flow Editor

  12. Click "Create" to save your route

Route Page

6. Testing via Playground

Now let's test our creatures API:

  1. Navigate to Playground in the top navigation bar
  2. Select your project
  3. Click on the "FETCH_CREATURES" route

    Playground Project Page
  4. Enter the value "true" for the "pet" field in the "URL Parameters" section
  5. Click "Send" to make a request to your API
  6. You should see a response like:
    {
      "creatures": ["Dog", "Cat", "Hamster"]
    }
  7. Enter the value "false" for the "pet" field in the "URL Parameters" section
  8. Click "Send" again
  9. You should see a response like:
    {
      "creatures": ["Scorpion", "Kangaroo"]
    }

Playground Request Page
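Outside the Playground, the same route can be exercised with any HTTP client. A minimal TypeScript sketch, reusing the placeholder base URL from earlier and assuming the response body matches the examples above:

  // Minimal client-side call to the filter route; the base URL is a placeholder.
  async function getCreatures(pet: boolean): Promise<string[]> {
    const res = await fetch(
      `https://your-instance.example.com/creatures/fetch?pet=${pet}`
    );
    if (!res.ok) throw new Error(`Request failed: ${res.status}`);
    const body: { creatures: string[] } = await res.json();
    return body.creatures;
  }

  // Example usage: logs something like ["Dog", "Cat", "Hamster"]
  getCreatures(true).then((creatures) => console.log(creatures));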

Congratulations!

You've successfully built an API that uses a loop to filter data based on user input. You have learned how to:

  • Fetch a list of data objects
  • Create an empty array to store results
  • Loop through a data list
  • Use conditional logic within a loop
  • Add items to an array dynamically
  • Return the filtered results

Next Steps

Challenge yourself by extending this project:

  • Add more structures: Add a diet (e.g., "carnivore", "herbivore") or habitat structure to your collection
  • More complex filtering: Modify the route to accept multiple filter parameters (e.g., pet and diet)
  • Implement sorting: After filtering, use another loop or a different block to sort the results alphabetically by species
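For the sorting challenge, the target behaviour looks like the short TypeScript sketch below; whether you build it with another loop or a different block, this is only an illustration of the intended result:

  // Intended result of the sorting challenge: alphabetical order by species.
  const filtered = ["Hamster", "Cat", "Dog"];
  const sorted = [...filtered].sort((a, b) => a.localeCompare(b));
  // sorted: ["Cat", "Dog", "Hamster"]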

Continuous Improvement

Note: Kinesis API is continuously evolving based on user feedback. As users test and provide suggestions, the platform will become simpler, more intuitive, and easier to use. This tutorial will be updated regularly to reflect improvements in the user experience and new features.

We value your feedback! If you have suggestions for improving this tutorial or the Kinesis API platform, please reach out through our contact page or consider raising a new ticket.