Asoba Ona Documentation

Core Concepts

Understanding these fundamental concepts will help you make the most of Ona’s energy forecasting platform. This section explains how our system works, defines key terminology, and covers the principles behind accurate energy predictions.


How Ona Works

The Forecasting Pipeline

graph LR
    A[Historical Data] --> B[Data Preprocessing]
    B --> C[Feature Engineering]
    C --> D[Model Training]
    D --> E[Model Validation]
    E --> F[Deployment]
    F --> G[Real-time Forecasting]
    G --> H[Results Delivery]

1. Data Ingestion & Preprocessing

2. Feature Engineering

3. Model Training

4. Forecasting & Delivery


Data Architecture

Regional Deployment

Ona operates in multiple regions for data sovereignty and performance:

Region          Endpoint         Coverage             Data Center
Africa          af-south-1       Sub-Saharan Africa   Cape Town
North America   us-east-1        USA, Canada          Virginia
Europe          eu-west-1        European Union       Ireland
Asia Pacific    ap-southeast-1   APAC region          Singapore
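
For clients that pin requests to a home region, an explicit lookup keeps the routing visible in code. This is a minimal sketch: the region identifiers come from the table above, but the base-URL pattern is an assumption for illustration, not Ona's documented URL scheme.

# Region identifiers from the table above; the URL template is hypothetical.
REGION_ENDPOINTS = {
    "africa": "af-south-1",
    "north_america": "us-east-1",
    "europe": "eu-west-1",
    "asia_pacific": "ap-southeast-1",
}

def api_base_url(region: str) -> str:
    # Assumed URL pattern -- substitute the actual host from your onboarding docs.
    endpoint = REGION_ENDPOINTS[region]
    return f"https://api.{endpoint}.ona.example.com"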

Benefits:

Data Flow

CSV Upload → Lambda@Edge → Regional API Gateway → 
Preprocessing Service → S3 Data Lake → SageMaker Training → 
Model Registry → Inference Service → Results Storage

Forecasting Types

Historical Data Training

Purpose: Train custom models using your site’s historical data

Requirements:

Process:

  1. Upload via /upload_historical endpoint
  2. Trigger training via /train endpoint
  3. Model validation and backtesting
  4. Model deployment for live forecasting
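
A minimal client sketch of steps 1 and 2 above, using the /upload_historical and /train endpoints named in the list. The base URL, auth header name, and payload fields are illustrative assumptions; consult the API Reference for the exact contract.

import requests

BASE_URL = "https://api.af-south-1.ona.example.com"  # assumed URL scheme
HEADERS = {"x-api-key": "YOUR_API_KEY"}              # assumed auth header

# Step 1: upload historical data for the site.
with open("site_history.csv", "rb") as f:
    resp = requests.post(f"{BASE_URL}/upload_historical",
                         headers=HEADERS,
                         files={"file": f},
                         params={"site_id": "SE123456789"})
resp.raise_for_status()

# Step 2: trigger training; validation, backtesting, and deployment follow server-side.
resp = requests.post(f"{BASE_URL}/train", headers=HEADERS,
                     json={"site_id": "SE123456789"})
resp.raise_for_status()
print(resp.json())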

Nowcast Forecasting

Purpose: Real-time forecasting using recent data

Use Cases:

Features:

Long-term Forecasting

Purpose: Strategic planning and capacity optimization

Horizons:

Applications:


Model Types & Algorithms

Ensemble Methods

Ona uses multiple algorithms to maximize accuracy:

XGBoost (Gradient Boosting):

LSTM Neural Networks:

Prophet (Time Series):

Weather-Aware Models:

Model Selection Logic

def select_model(forecast_horizon, data_quality, use_case):
    """Route a request to a model family based on the forecast horizon
    (in hours), a 0-1 data-quality score, and the use case."""
    if forecast_horizon <= 24:  # intraday horizons, in hours
        if data_quality > 0.95:
            return "XGBoost + Weather"
        else:
            return "LSTM + Interpolation"

    elif forecast_horizon <= 168:  # up to 1 week
        return "LSTM + Prophet Ensemble"

    else:  # long-term horizons
        return "Prophet + Seasonal Decomposition"

Data Quality & Preprocessing

Data Validation Pipeline

1. Format Validation

2. Quality Assessment

quality_metrics = {
    "completeness": missing_data_percentage,
    "consistency": interval_regularity_score, 
    "accuracy": outlier_detection_score,
    "freshness": data_recency_hours
}
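
One plausible way to derive these four metrics from a timestamped pandas series is shown below. The exact definitions and scales Ona uses internally are not specified here, so treat this as an illustrative sketch.

import pandas as pd

def assess_quality(df: pd.DataFrame) -> dict:
    """df has a tz-naive DatetimeIndex and a 'kwh' column; scores are illustrative."""
    ts = df.index.to_series()
    deltas = ts.diff().dropna()
    value = df["kwh"]
    z = (value - value.mean()) / value.std()  # simple z-score outlier test
    return {
        "completeness": value.isna().mean(),                 # fraction of missing readings
        "consistency": (deltas == deltas.mode()[0]).mean(),  # share of regular intervals
        "accuracy": 1.0 - (z.abs() > 3).mean(),              # share of non-outlier readings
        "freshness": (pd.Timestamp.now() - ts.iloc[-1]).total_seconds() / 3600,
    }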

3. Automatic Corrections

Data Enrichment

Weather Integration:

Calendar Features:
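
The exact enrichment features are not listed here, but calendar features for energy time series typically encode daily, weekly, and seasonal cycles. A hypothetical sketch:

import pandas as pd

def add_calendar_features(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative calendar features; assumes a DatetimeIndex."""
    out = df.copy()
    out["hour"] = out.index.hour              # daily cycle
    out["day_of_week"] = out.index.dayofweek  # weekly cycle
    out["month"] = out.index.month            # seasonal cycle
    out["is_weekend"] = out["day_of_week"] >= 5
    return out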


Accuracy & Performance Metrics

Standard Metrics

Mean Absolute Error (MAE):

MAE = (1/n) * Σ|actual - predicted|

Root Mean Square Error (RMSE):

RMSE = √[(1/n) * Σ(actual - predicted)²]

Mean Absolute Percentage Error (MAPE):

MAPE = (100/n) * Σ|(actual - predicted)/actual|
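
These three formulas translate directly to code; a minimal NumPy version:

import numpy as np

def standard_metrics(actual: np.ndarray, predicted: np.ndarray) -> dict:
    err = actual - predicted
    return {
        "mae": np.mean(np.abs(err)),
        "rmse": np.sqrt(np.mean(err ** 2)),
        # MAPE is undefined where actual == 0 (common overnight for solar)
        "mape": 100 * np.mean(np.abs(err / actual)),
    }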

Advanced Metrics

Forecast Skill Score:

Skill = 1 - (MAE_model / MAE_baseline)

Pinball Loss (Quantile Accuracy):

Pinball(q) = (1/n) * Σ max(q * (actual - predicted), (q - 1) * (actual - predicted))

Benchmark Performance

Use Case               Typical MAE   Target RMSE   Skill Score
Residential Solar PV   10-15%        15-25%        0.2-0.4
Commercial Load        6-12%         10-18%        0.3-0.5
Utility-Scale Solar    8-12%         12-20%        0.4-0.6
Wind Power             15-25%        20-35%        0.2-0.3

Integration Patterns

Batch Processing

Use Case: Daily forecast generation for large portfolios

# Example batch workflow
def daily_batch_process():
    sites = get_active_sites()
    
    for site in sites:
        # Upload latest data
        upload_nowcast_data(site)
        
        # Generate forecasts
        forecast = generate_forecast(site, horizon='24h')
        
        # Store results
        store_forecast(site, forecast)
        
        # Send alerts if needed  
        check_forecast_alerts(site, forecast)

Real-time Streaming

Use Case: Live grid operations and trading

# Real-time processing
def process_real_time_data(site_data):
    # Validate incoming data
    validated_data = validate_stream(site_data)
    
    # Update running models
    update_online_model(validated_data)
    
    # Generate immediate forecast
    forecast = predict_next_interval(validated_data)
    
    # Publish to downstream systems
    publish_forecast(forecast)

Event-Driven Architecture

Use Case: Webhook-based forecast delivery

{
  "event": "forecast_ready",
  "site_id": "SE123456789", 
  "forecast_time": "2024-01-15T08:00:00Z",
  "horizon_hours": 24,
  "accuracy_metrics": {
    "mae": 0.12,
    "rmse": 0.18,
    "skill_score": 0.35
  },
  "download_url": "https://s3.amazonaws.com/ona-forecasts/..."
}
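
A minimal receiver for this event, sketched with Flask. The route path and the absence of signature verification are assumptions; the field names come from the payload above.

from flask import Flask, request

app = Flask(__name__)

@app.route("/ona/webhook", methods=["POST"])  # path is your choice, registered with Ona
def forecast_ready():
    event = request.get_json()
    if event.get("event") == "forecast_ready":
        # Fetch the forecast file from the pre-signed URL in the payload.
        print(event["site_id"], event["forecast_time"], event["download_url"])
    return ("", 204)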

Error Handling & Reliability

Common Error Scenarios

1. Data Quality Issues

2. Model Performance Degradation

3. API Availability

Reliability Features

Multi-Region Failover:

Graceful Degradation:

Monitoring & Alerting:


Security & Compliance

Data Protection

Encryption:

Access Control:

Data Retention:

Compliance Standards

Certifications:

Industry Standards:


Next Steps

Now that you understand how Ona works:

  1. Start building: API Reference for detailed endpoints
  2. See examples: Use Cases for real implementations
  3. Try the SDK: SDK Documentation for easier integration
  4. Get support: Contact us for technical questions

Get Help & Stay Updated

Contact Support

For technical assistance, feature requests, or any other questions, please reach out to our dedicated support team.

Email Support · Join Discord

Subscribe to Updates

© 2025 Asoba Corporation. All rights reserved.