
00 - How This Course Was Made

September 2025

This course was not written in the traditional way.
Instead, it was generated from a structured outline and expanded automatically with the help of a local LLM.

Source Materials

  • The course outline defines the five-chapter structure, main topics, and proposed exercises.

  • The expansion script reads that outline, splits it into sections, and sends structured prompts to a local LLM for expansion into Markdown files.

  • The language model used is Gemma 3n 4B, a relatively compact open model, served locally through a KoboldCPP endpoint.
    Each section was generated using short, consistent prompts to produce concise, lecture-ready notes.
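The call to the local endpoint can be sketched as follows. The request shape matches KoboldCPP's `/api/v1/generate` API on its default port, but the prompt wording, sampling parameters, and function names here are illustrative assumptions, not the exact ones used by the expansion script.

```python
import json
import urllib.request

# Default KoboldCPP endpoint (assumption: local server on port 5001)
KOBOLD_URL = "http://localhost:5001/api/v1/generate"

def build_payload(section_title, section_body, max_length=512):
    """Assemble a short, consistent prompt for one outline section."""
    prompt = (
        "Expand the following course outline section into concise, "
        "lecture-ready Markdown notes.\n\n"
        f"## {section_title}\n{section_body}\n"
    )
    return {"prompt": prompt, "max_length": max_length, "temperature": 0.7}

def expand_section(title, body):
    """POST the prompt to the local LLM and return the generated text."""
    req = urllib.request.Request(
        KOBOLD_URL,
        data=json.dumps(build_payload(title, body)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # KoboldCPP returns {"results": [{"text": "..."}]}
        return json.loads(resp.read())["results"][0]["text"]
```

Keeping the prompt template in one place is what makes the "same prompt structure for all sections" guarantee below easy to enforce.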

Workflow

  1. Curate the schedule
    The initial schedule was written manually with a focus on progression (CPU → threads/async → distributed → GPU → high-level libraries).
    Each chapter includes:

    • Key topics
    • Proposed exercises
    • Notes on infrastructure, troubleshooting, or optional advanced material
  2. Automated expansion

    • The script parses the outline into individual topics.
    • For each topic, it sends a prompt to the LLM.
    • The model returns a short Markdown section with:
      • A Key Concept
      • A bullet list of Topics
      • Optional In-Session Exercise, Pitfalls, and Best Practices
    • Sections are written to separate .md files in the output directory.
  3. Post-processing
    Some adjustments are applied to ensure consistent formatting across all sections:

    • Headings normalized
    • Bullet styles unified
    • Extra text trimmed
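A minimal sketch of what this post-processing could look like. The specific regexes and rules here are assumptions for illustration; the actual script may normalize differently.

```python
import re

def normalize_markdown(text):
    """Illustrative cleanup pass: unify bullets, normalize headings, trim."""
    lines = []
    for line in text.splitlines():
        # Unify bullet styles: '*' or '+' list markers become '-'
        line = re.sub(r"^(\s*)[*+]\s+", r"\1- ", line)
        # Normalize headings: collapse extra spaces after the '#' run
        line = re.sub(r"^(#+)\s+", r"\1 ", line)
        lines.append(line.rstrip())
    # Trim extra text: collapse runs of blank lines, strip the edges
    return re.sub(r"\n{3,}", "\n\n", "\n".join(lines)).strip() + "\n"
```

Running every generated `.md` file through a single function like this is what keeps formatting consistent across sections produced in separate LLM calls.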

Why This Approach?

  • Reproducibility: anyone can regenerate the course by running the script locally.
  • Consistency: the same prompt structure was applied to all sections.
  • Efficiency: a compact LLM (Gemma 3n 4B) produces usable drafts quickly without heavy infrastructure.
  • Transparency: the entire pipeline (outline + script + model outputs) can be versioned on GitHub.

Important Note

This course is intended as structured lecture notes + exercises, not as a replacement for textbooks or documentation.
The design assumes learners can and will consult external references for deeper understanding.

The meta-process itself can be seen as an example of automation in education pipelines, an idea closely related to the broader themes of parallelism and efficiency in HPC (that is, the way the course was generated is itself a small-scale analogy of the concepts it teaches).


  • © 2025 iph.ar