Combined Workflow Methodology#

Tip

This section describes the integrated methodology that combines BC_Nexus (CLEWs), PyPSA_BC, and the bidirectional linking tool for comprehensive energy system analysis.

Workflow Overview#

The BC Combined Modelling framework integrates three main components:

  1. BC_Nexus - CLEWs-based renewable energy resource assessment

  2. PyPSA_BC - Power system optimization modeling

  3. Linking Tool - Bidirectional data exchange and consistency checking

(Figure: Detailed Combined Workflow)

Integrated Methodology#

Phase 1: Resource Assessment (BC_Nexus)#

The CLEWs-based BC_Nexus model performs comprehensive renewable energy resource assessment:

Step 1: Spatial Resource Mapping#

  • Renewable Resource Potential: Solar and wind resource mapping using high-resolution meteorological data

  • Land Use Constraints: Integration of protected areas, urban zones, and infrastructure exclusions

  • Grid Integration Points: Identification of optimal connection points to existing transmission infrastructure
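As a minimal sketch of how land-use exclusions can be applied to a resource grid (the capacity factors, exclusion flags, and flat-list layout here are invented for illustration, not BC_Nexus outputs):

```python
# Hypothetical per-cell mean wind capacity factors for a small grid (row-major)
capacity_factor = [0.32, 0.35, 0.30, 0.28, 0.31, 0.38, 0.36, 0.27]

# Exclusion flags: True where development is prohibited (protected areas,
# urban zones, existing infrastructure)
excluded = [True, False, False, False, False, False, True, False]

# Zero out excluded cells to obtain the developable resource per cell
developable = [0.0 if ex else cf for cf, ex in zip(capacity_factor, excluded)]

# Share of cells still available for development
available_share = 1.0 - sum(excluded) / len(excluded)
```

The same masking logic applies regardless of whether the resource data is held in flat lists, NumPy arrays, or GIS rasters.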

Step 2: Temporal Analysis#

  • Resource Variability: Hourly renewable energy profiles for multiple years

  • Demand Patterns: Regional electricity demand characterization

  • Storage Requirements: Assessment of energy storage needs for system balancing
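A toy illustration of how hourly profiles feed a storage-need assessment (the demand and generation values are invented, and real assessments span full years rather than six hours):

```python
# Hypothetical 6-hour slice: demand and renewable generation (MW)
demand = [500, 480, 520, 600, 650, 580]
renewable = [300, 420, 610, 550, 400, 620]

# Residual load: positive values must be met by dispatchable supply or storage
residual = [d - r for d, r in zip(demand, renewable)]

# Simple storage-need indicator: energy required over deficit hours
# (MWh, assuming 1-hour steps)
deficit_energy = sum(x for x in residual if x > 0)
```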

Step 3: Scenario Development#

  • Technology Scenarios: Various renewable energy technology deployment pathways

  • Policy Scenarios: Different regulatory and incentive frameworks

  • Infrastructure Scenarios: Transmission expansion alternatives

Phase 2: Data Linking and Translation#

The linking tool converts, aligns, and validates the data exchanged between the two modeling frameworks:

Step 1: Data Format Conversion#

from bc_combined_modelling.clews_to_pypsa import CLEWSPyPSALinker

# Initialize data converter
linker = CLEWSPyPSALinker(
    clews_output='results/bc_nexus/',
    pypsa_input='data/pypsa_inputs/'
)

# Convert renewable profiles
renewable_profiles = linker.convert_renewable_profiles(
    technologies=['solar_pv', 'wind_onshore'],
    temporal_resolution='hourly'
)

# Convert demand profiles
demand_profiles = linker.convert_demand_profiles(
    regions=['lower_mainland', 'vancouver_island'],
    sectors=['residential', 'commercial', 'industrial']
)

Step 2: Spatial Aggregation#

  • Regional Clustering: Aggregation of high-resolution resource data to PyPSA network nodes

  • Transmission Mapping: Linking of resource locations to transmission infrastructure

  • Constraint Translation: Conversion of land-use constraints to PyPSA capacity limits
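The regional clustering step can be sketched as a simple aggregation of site-level potentials up to network nodes (the site records and node names below are hypothetical placeholders):

```python
# Hypothetical resource sites with MW potential, each mapped to a network node
sites = [
    {"node": "lower_mainland",   "potential_mw": 120.0},
    {"node": "lower_mainland",   "potential_mw": 80.0},
    {"node": "vancouver_island", "potential_mw": 60.0},
    {"node": "vancouver_island", "potential_mw": 40.0},
]

# Aggregate site-level potentials into per-node capacity limits
capacity_limits = {}
for site in sites:
    node = site["node"]
    capacity_limits[node] = capacity_limits.get(node, 0.0) + site["potential_mw"]
```

The resulting per-node totals are the kind of values that become capacity limits (e.g. `p_nom_max`) in the PyPSA network.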

Step 3: Temporal Alignment#

  • Resolution Matching: Alignment of different temporal resolutions between models

  • Scenario Consistency: Ensuring consistent assumptions across modeling frameworks

  • Data Validation: Quality checks and consistency verification
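Resolution matching can be as simple as block-averaging the finer series to the coarser model's timeslices; a minimal sketch with made-up values:

```python
# Hypothetical hourly demand (MW) averaged to 3-hour blocks to match a
# coarser model's timeslice resolution
hourly = [100, 200, 300, 400, 500, 600]
step = 3
coarse = [sum(hourly[i:i + step]) / step for i in range(0, len(hourly), step)]
```

Averaging preserves energy totals per block; summing would be used instead when the quantity is cumulative (e.g. MWh rather than MW).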

Phase 3: Power System Optimization (PyPSA_BC)#

PyPSA_BC performs detailed power system analysis using data from BC_Nexus:

Step 1: Network Modeling#

  • Transmission Network: Detailed electrical network representation

  • Generator Fleet: Existing and potential generation assets

  • Storage Systems: Battery, pumped hydro, and other storage technologies

Step 2: Optimization#

from bc_combined_modelling.pypsa_bc import PyPSABCInterface

# Initialize the PyPSA_BC interface
# (named pypsa_model to avoid shadowing the pypsa package)
pypsa_model = PyPSABCInterface(
    network_file='data/bc_network.h5',
    renewable_profiles=renewable_profiles,
    demand_profiles=demand_profiles
)

# Run optimization
results = pypsa_model.optimize_network(
    objective='cost_minimization',
    constraints=['reliability', 'emissions'],
    time_horizon='2020-2050'
)

Step 3: System Analysis#

  • Capacity Planning: Optimal renewable energy deployment

  • Operational Analysis: Hourly system dispatch and storage operation

  • Grid Integration: Transmission expansion requirements

Phase 4: Results Integration and Analysis#

Bidirectional Feedback Loop#

The framework incorporates feedback between models:

from bc_combined_modelling import CombinedWorkflow

# Initialize combined workflow
workflow = CombinedWorkflow()

max_iterations = 5        # cap on feedback iterations
pypsa_constraints = None  # no infrastructure limits on the first pass

# Iterative analysis with feedback
for iteration in range(max_iterations):
    # Run BC_Nexus with current assumptions
    clews_results = workflow.run_bc_nexus(
        infrastructure_limits=pypsa_constraints
    )

    # Update PyPSA with CLEWs results
    pypsa_results = workflow.run_pypsa_bc(
        renewable_profiles=clews_results['profiles'],
        capacity_limits=clews_results['potentials']
    )

    # Check convergence
    if workflow.check_convergence(iteration):
        break

    # Update constraints for next iteration
    pypsa_constraints = workflow.extract_constraints(pypsa_results)

Key Innovations#

1. Enhanced Storage Modeling#

The framework includes advanced storage modeling capabilities:

Battery Storage#

  • Technology Characterization: Detailed battery performance parameters

  • Degradation Modeling: Capacity and efficiency degradation over time

  • Grid Services: Multiple revenue streams including energy arbitrage and ancillary services
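A simple capacity-fade sketch illustrates the degradation idea (the 2%-per-year fade rate and 100 MWh initial capacity are assumed placeholders, not framework defaults):

```python
# Hypothetical battery degradation: geometric capacity fade per year
initial_mwh = 100.0
annual_fade = 0.02  # assumed 2% usable-capacity loss per year

def usable_capacity(year):
    """Usable battery energy capacity (MWh) after `year` years of fade."""
    return initial_mwh * (1.0 - annual_fade) ** year
```

Real degradation models also depend on cycling depth, temperature, and calendar aging, which this sketch deliberately omits.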

Hydro Storage#

  • Cascaded Systems: Multi-reservoir hydro system optimization

  • Environmental Constraints: Minimum flow requirements and ecological limits

  • Seasonal Patterns: Long-term energy storage and seasonal shifting
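A minimal sketch of enforcing a minimum environmental flow during reservoir dispatch (the inflows, minimum-flow value, and storage bookkeeping are all illustrative, not BC-specific data):

```python
# Hypothetical single-reservoir dispatch respecting a minimum environmental flow
inflow = [90.0, 40.0, 30.0]  # inflow per period (m^3/s)
min_flow = 50.0              # regulatory minimum release (m^3/s)
storage = 100.0              # reservoir volume in flow-equivalent units

releases = []
for q_in in inflow:
    release = max(min_flow, q_in)  # always meet the environmental minimum
    storage += q_in - release      # draw down storage when inflow falls short
    releases.append(release)
```

Cascaded multi-reservoir systems add coupling between releases and downstream inflows, but each reservoir still carries a constraint of this form.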

2. Multi-Resolution Temporal Analysis#

Timeslice Aggregation/Disaggregation#

from bc_combined_modelling.utils import TemporalProcessor

# Configure temporal resolution
temporal_processor = TemporalProcessor()

# Aggregate to representative periods for optimization
representative_periods = temporal_processor.aggregate_timeslices(
    full_time_series=hourly_data,
    method='k_means_clustering',
    num_periods=12  # Monthly representatives
)

# Disaggregate results back to full resolution
full_results = temporal_processor.disaggregate_results(
    representative_results=optimization_results,
    target_resolution='hourly'
)

3. Automated Data Pipeline#

CLEWs Data Preparation#

  • Automated Downloads: Meteorological, land use, and infrastructure data

  • Data Processing: Standardized processing pipelines for different data sources

  • Quality Control: Automated data validation and error checking
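Automated quality control might look like the following sketch (the `validate_profile` helper and its specific checks are hypothetical, not part of the framework's API):

```python
def validate_profile(values, expected_length=8760):
    """Return a list of issues found in an hourly capacity-factor profile."""
    issues = []
    # Completeness: one value per hour of the year
    if len(values) != expected_length:
        issues.append(f"expected {expected_length} values, got {len(values)}")
    # Missing data
    if any(v is None for v in values):
        issues.append("missing values present")
    # Physical plausibility: capacity factors must lie in [0, 1]
    if any(v is not None and not (0.0 <= v <= 1.0) for v in values):
        issues.append("capacity factors outside [0, 1]")
    return issues
```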

Scenario Analysis Framework#

Scenario Definition#

# Example scenario configuration
scenarios:
  baseline:
    description: "Current policies and trends"
    renewable_targets:
      solar: 1000  # MW
      wind: 2000   # MW
    carbon_price: 50  # CAD/tCO2
    
  high_renewable:
    description: "Accelerated renewable deployment"
    renewable_targets:
      solar: 5000  # MW
      wind: 8000   # MW
    carbon_price: 100  # CAD/tCO2
    
  storage_focus:
    description: "Enhanced storage deployment"
    storage_targets:
      battery: 2000  # MWh
      pumped_hydro: 5000  # MWh

Results Comparison#

# Compare scenarios
comparison = workflow.compare_scenarios(
    scenarios=['baseline', 'high_renewable', 'storage_focus'],
    metrics=['cost', 'emissions', 'reliability', 'renewable_share']
)

# Generate comparison report
workflow.generate_comparison_report(
    comparison_results=comparison,
    output_file='results/scenario_comparison.html'
)

Validation and Uncertainty Analysis#

Model Validation#

  • Historical Validation: Comparison with historical system performance

  • Cross-Model Validation: Consistency checks between BC_Nexus and PyPSA_BC results

  • Sensitivity Analysis: Parameter sensitivity and uncertainty quantification
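A cross-model consistency check can flag technologies whose annual generation diverges between the two models beyond a tolerance; a sketch with invented numbers:

```python
# Hypothetical annual generation by technology (TWh) from each model
clews_twh = {"hydro": 60.0, "wind": 8.0, "solar": 2.0}
pypsa_twh = {"hydro": 58.5, "wind": 8.8, "solar": 2.05}

tolerance = 0.05  # flag technologies differing by more than 5%
flagged = [
    tech for tech in clews_twh
    if abs(pypsa_twh[tech] - clews_twh[tech]) / clews_twh[tech] > tolerance
]
```

Flagged technologies then prompt a review of the assumptions each model received, rather than an automatic override of either result.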

Uncertainty Quantification#

# Monte Carlo uncertainty analysis
uncertainty_results = workflow.run_uncertainty_analysis(
    parameters=['demand_growth', 'renewable_costs', 'carbon_price'],
    distributions={
        'demand_growth': 'normal(0.02, 0.01)',
        'renewable_costs': 'uniform(-0.3, 0.1)',
        'carbon_price': 'triangular(30, 50, 120)'
    },
    num_samples=1000
)

Performance Optimization#

Computational Efficiency#

  • Parallel Processing: Multi-core utilization for independent calculations

  • Memory Management: Efficient handling of large time series datasets

  • Caching: Intermediate result caching to avoid redundant calculations
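Caching deterministic intermediate results is straightforward with `functools.lru_cache`; the sketch below uses an assumed annualized-cost helper (the capital recovery factor formula itself is standard):

```python
from functools import lru_cache

# Cache an expensive, deterministic calculation so repeated scenario runs
# reuse earlier results instead of recomputing them
@lru_cache(maxsize=None)
def annualized_cost(capex, rate, lifetime_years):
    """Annualized capital cost via the capital recovery factor."""
    crf = (rate * (1 + rate) ** lifetime_years
           / ((1 + rate) ** lifetime_years - 1))
    return capex * crf
```

This only pays off for pure functions of hashable arguments; results that depend on mutable state (e.g. a network object) need an explicit cache key instead.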

Scalability Considerations#

  • Regional Extensions: Framework designed for expansion to other regions

  • Technology Additions: Modular structure for adding new technologies

  • Resolution Flexibility: Adaptable spatial and temporal resolution

Best Practices#

Workflow Execution#

  1. Data Preparation: Ensure all required data is available and validated

  2. Configuration Review: Verify scenario parameters and model settings

  3. Incremental Development: Start with simplified scenarios before full complexity

  4. Result Validation: Cross-check results between different model components

  5. Documentation: Maintain detailed logs of assumptions and modifications

Quality Assurance#

  • Version Control: Track model versions and input data changes

  • Reproducibility: Ensure results can be replicated with saved configurations

  • Peer Review: Regular review of methodology and results by domain experts


For implementation details, see the API Reference. For troubleshooting, see the Troubleshooting Guide.