Program Yourself

As we build this Personal Knowledge Management (PKM) system, you can track the PKM projects as we work through the phases of the 100-day process, or you can follow the journal in the PKM repo, particularly the most recent days.

The first 100-day AncientGuy Fitness push, our preparation for this larger program, was FOUNDATIONAL: it involved better understanding how we might improve in ten different aspects of fitness, the building blocks necessary to durably sustain our capacity to work, to be ready to serve, and to keep exploring new levels of knowledge and understanding.

Although these ten areas are FOUNDATIONAL, they must be continually addressed and improved.

If ANYTHING, 100 days of delving deeply into these topics has reaffirmed our conviction that much of what we are told is at best not exactly appropriate, worthy of extreme skepticism, and more than likely just flat-out wrong in the worst possible way that something can be wrong: i.e., many aspects are PARTIALLY right, or make sense in certain contexts but not all contexts. Much of the conventional wisdom on diet, health, and fitness is like the 1992 USDA Food Pyramid -- comfortable, yet addictively deleterious to health, longevity, effectiveness, and enjoyment, especially after 65.

COMFORT KILLS! Mostly, it's the complacency and entitlement to comfort that so extensively weakens the ability to provide for one's sustenance or security.

If anyone follows the conventional wisdom, the best they can expect is a lifespan that matches the Social Security actuarial tables AND final years that are generally unhealthy and unfit, addicted to prescription medicines, and in constant need of medical care and attention.

Now it's time for the second 100-day push. The purpose driving this site is thus a 100-day curriculum for building a Personal Knowledge Management (PKM) system. The PKM system we develop is not exactly an end in and of itself ... although it sort of could be -- the larger purpose of this 100-day curriculum is to build a base for the third 100-day push, which will build a Personal Knowledge Engineering (PKE) system. These first 300 days will shape the subsequent 100-day pushes, which are likely to be about deeper tech-assisted contemplation and deeper investigation of what recovery, rehabilitation, and continual rejuvenation are about.

Personal Knowledge Management is all about CONTEXT ENGINEERING

This includes various structures for assuring context for coherent processing ... any news, sports, or weather data that is pushed at you by others who manage your information flow is going to be about somebody else's context. Effectively managing one's personal knowledge is therefore entirely about context engineering and about developing the architecture of the toolchains one uses to engineer context -- and, of course, AI has made it possible for mere mortals, not superprogrammers, to develop the tools that give us the KNOWLEDGE we need to shape our lives.

It is up to each of us to DEVELOP our lives ... judiciously, but aggressively, USING the talents, tools, and technologies we have been blessed with. Programming ourselves is thus a matter of expressing gratitude -- we do it to continue developing the richness of our lives, to take responsibility for programming ourselves, and to master and wield information technology so that we understand its dangers and misuses as we INFORM ourselves. Many who merely consume content, or food, are VERY EFFECTIVELY being weaponized into weak, hyper-defensive, reactionary liabilities to their communities, their circles of friends and professional colleagues, their families, and themselves.

Both the PKM and PKE systems are implementations based on the best thinking in extending our intellectual capabilities, such as Building a Second Brain (BASB) and the other thinking that drives some of the best digital notetaking tools for personal knowledge. In other words, both the PKM and PKE curricula are exercises in reinventing wheels to some degree, because they involve mastering and tweaking technologies and tools which already function plenty well without any further automation or AI/ML ops engineering heavy lifting.

The real point is not so much the tech-assisted development of these capabilities as really using all of the tools and technologies: the free and open-source distributed version control system Git, the various Git workflows, and improved approaches to Git branching. Using Git only scratches the surface of what a hub like GitHub provides, such as Actions for automating workflows, or Projects, Discussions, and Issues to drive development. Using Git and GitHub typically involves full-featured integrated development environments (IDEs) like Visual Studio Code, with well-developed AI-assisted extensions for those IDEs such as Cline utilizing the best LLM models on OpenRouter ... but as a next step, we recognize that development use cases must also be achievable with code-in-browser tools like Ona, which runs full VS Code on any device -- in the browser or even on a smartphone -- with sandboxed dev environments either in the Ona cloud or your VPC. Moving to Ona enables agentic, multi-environment development and allows for parallel-track AI-assisted work as development workflows evolve over time.

The true objective of all this, then, is not so much the tech-assisted development of these capabilities; rather, it is a meta-objective: stretching or extending human cognitive capabilities with these technologies, building the proficiencies necessary to pursue even more advanced levels of knowledge engineering.

The 100-Day Architect: A Blueprint for an AI-Augmented Personal Knowledge Management System

You can, and probably should, use your own preferences and needs to develop a better PKM system for accomplishing this objective ... the important thing, however, is to just get started with some sort of viable 100-day plan and then steadily work at it. You can tear the plan up and start over after 30 days, but it's important to put a plan together that breaks the work into manageable daily chunks and then get after it.

Introduction: The PKM as a Development Project

This report outlines a 100-day, 100-module plan for the systematic overhaul and AI-augmentation of a Personal Knowledge Management (PKM) system. The core philosophy of this endeavor is to treat the PKM not as a static repository of notes, but as a dynamic, evolving software project. This approach transforms the act of knowledge management from passive collection into an active process of system architecture, development, and continuous improvement. The 100-day journey is structured as a comprehensive development lifecycle, progressing from foundational infrastructure setup to the implementation of advanced, custom-built, AI-driven features.

The architecture of this system is organized into five distinct phases, each building upon the capabilities established in the previous one. This creates a layered "stack" of functionality, starting with a solid, version-controlled foundation and culminating in a highly intelligent, automated environment for learning and exploration.

A central architectural decision underpins this entire plan: the positioning of the GitHub ecosystem as the core operating system for the PKM. The user's goal to gain experience with GitHub Actions, Issues, Projects, and Discussions is not treated as a separate learning objective but as the strategic foundation for the entire system.1 This unified platform provides the necessary components to manage a complex, multi-tool environment. GitHub Issues will serve as the primary interface for managing the lifecycle of each knowledge topic, from initial idea to completed exploration.3 GitHub Projects will provide the high-level roadmaps and Kanban boards for tracking progress across all learning endeavors.5 Most critically, GitHub Actions will function as the system's central automation engine—its "kernel"—orchestrating every other component, from note processing and AI analysis to the final publication of the knowledge base.1 This integrated approach ensures that all disparate tools work in concert, managed by a single, powerful, and version-controlled platform.

Technology Stack and Phased Integration

The following table provides a strategic overview of the technologies to be integrated throughout this 100-day project. It outlines each component's primary role within the PKM ecosystem and the specific phases during which it will be introduced and mastered. This serves as a high-level roadmap, clarifying not only what will be learned, but when and why it is being introduced into the system architecture.

| Technology | Primary Role | Primary Phases |
| --- | --- | --- |
| GitHub (Repo, Issues, Projects) | PKM Operating System, Task & Knowledge Lifecycle Management | I, II, IV, V |
| GitHub Actions | Central Automation & CI/CD Engine | I, IV, V |
| VSCode | Primary Development & Note-Authoring Environment | I |
| Foam Extension | Note Creation, Bi-directional Linking, Graph Visualization | I, II |
| mdBook | Static Site Generation & Public Knowledge Base Publishing | I, II, IV |
| Python | Automation Scripting, API Integration, Backend Logic | II, III, IV |
| OpenRouter | Unified AI Gateway for Accessing Multiple LLM Providers | III, IV, V |
| Google AI Studio | Rapid AI Prompt Prototyping & Experimentation | III |
| Hugging Face Transformers | Specialized NLP Models (e.g., Summarization) | III |
| Ollama | Local, Private Large Language Model (LLM) Inference | IV, V |
| Docker | Containerization for Reproducible Environments & Services | IV |
| Rust | High-Performance Custom Tooling & System Utilities | V |
| Modular Platform (Mojo, MAX) | High-Performance AI Inference & Programming Exploration | V |

Phase I: The Developer's Knowledge Foundation (Modules 1-20)

Focus: Establishing a rock-solid, automated foundation for the PKM. This phase is about building the "scaffolding" and the core "DevOps" pipeline for your knowledge.

Modules 1-5: Project Scaffolding with GitHub

The initial modules focus on establishing the project's central repository, which will serve as the single source of truth for all knowledge, code, and configuration. This is the foundational step in treating the PKM as a formal development project.

  1. Repository Creation and Initialization: A new private repository will be created on GitHub. This repository will house the entire PKM system, including Markdown notes, automation scripts, configuration files, and the mdBook source. Initializing the repository with a README.md file, a .gitignore file (configured for Python, Node.js, and Rust build artifacts), and a clear directory structure (/notes, /scripts, /book_src) is the first task.
  2. GitHub Projects for Meta-Tracking: Before managing knowledge topics, the system must manage itself. A GitHub Project will be created to track the progress of this 100-day plan.5 This project will be configured with a Kanban board layout, with columns such as "To Do," "In Progress," and "Done".2 This provides immediate, practical experience with the project management tools that will later be applied to learning topics.
  3. Structuring the 100-Day Plan as GitHub Issues: Each of the 100 modules in this plan will be created as a distinct GitHub Issue.3 This modularizes the work and allows for detailed tracking. Using GitHub's issue creation features, each module can be documented, discussed, and managed individually.2 (A scripted approach to creating these issues in bulk is sketched after this list.)
  4. Custom Fields and Project Views: The GitHub Project will be enhanced with custom fields to add rich metadata to each module's Issue. Fields such as "Phase" (e.g., "I: Foundation"), "Status" (e.g., "Not Started"), and "Technology" (e.g., "GitHub Actions") will be created.3 This allows for the creation of powerful, filtered views, such as a roadmap layout to visualize the timeline or a table view to group modules by technology.2
  5. Establishing Branching Strategy and Workflow: A simple Git branching strategy, such as GitFlow or a main-branch workflow, will be established. All work will be done on feature branches and merged into the main branch via pull requests. This enforces good version control hygiene from the outset and prepares the project for automated checks and workflows that trigger on pull requests.3
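
One way to avoid hand-creating dozens of issues is to script them against the GitHub REST API. Below is a minimal sketch, assuming a personal access token in the GITHUB_TOKEN environment variable and a hypothetical you/pkm-system repository slug; only the first two modules are shown.

    import os
    import requests

    REPO = "you/pkm-system"  # hypothetical repository slug
    API_URL = f"https://api.github.com/repos/{REPO}/issues"
    HEADERS = {
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    }

    modules = [
        (1, "Repository Creation and Initialization"),
        (2, "GitHub Projects for Meta-Tracking"),
        # ... continue through module 100
    ]

    for number, title in modules:
        payload = {
            "title": f"Module {number}: {title}",
            "body": f"Tracking issue for module {number} of the 100-day plan.",
            "labels": ["100-day-plan"],
        }
        response = requests.post(API_URL, headers=HEADERS, json=payload, timeout=30)
        response.raise_for_status()  # fail loudly if the API rejects a request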

Modules 6-10: Mastering the VSCode + Foam Environment

With the repository structured, the focus shifts to configuring the local development and note-taking environment. VSCode, augmented with the Foam extension, provides a powerful, free, and open-source platform for creating and navigating a graph-based knowledge base.8

  1. VSCode and Foam Workspace Setup: The process begins by cloning the newly created GitHub repository to a local machine. Following the official Foam documentation, the foam-template project will be used to scaffold the necessary workspace configuration within the repository.8 This involves setting up the
    .vscode/settings.json and .vscode/extensions.json files, which define the workspace's behavior and recommend essential extensions.8
  2. Core Foam Features - Linking and Graphing: This module is a deep dive into Foam's core functionality. The focus will be on creating atomic notes—single files dedicated to a single topic—and connecting them using [[wikilinks]].9 Practical exercises will involve creating a few sample notes and linking them to observe how the knowledge graph is built. The
    Foam: Show Graph command will be used to visualize these connections, providing a tangible representation of the relationships between ideas.9
  3. Navigation and Discovery with Backlinks: Understanding connections is a two-way street. This module will explore Foam's backlinking capabilities. The Backlinks Panel will be used to see which other notes reference the currently active note, providing crucial context and aiding in the discovery of emergent themes and relationships within the knowledge base.9
  4. Installation and Review of Recommended Extensions: The foam-template recommends a set of VSCode extensions to enhance the Markdown editing experience.8 This module involves installing and reviewing this list, which typically includes tools like
    Markdown All In One, Prettier for formatting, and extensions for Mermaid diagrams and emoji support.12 Understanding the role of each extension is key to customizing the environment for maximum productivity.
  5. Customizing VSCode Settings: The default Foam settings provide a great starting point, but personalization is key. This module involves editing the .vscode/settings.json file to tweak the user experience. This could include changing editor fonts, setting rulers for line length, or customizing how wikilinks are rendered in the editor, ensuring the environment is perfectly tailored to the user's workflow.8

Modules 11-15: mdBook Configuration and Initial Build

The next step is to configure mdBook, the Rust-based tool that will transform the collection of Markdown notes into a clean, searchable, and publishable static website.14

  1. Installing mdBook and Initializing the Book: mdBook will be installed using Rust's package manager, Cargo. Once installed, the mdbook init command will be run within the /book_src directory of the repository. This command creates the initial file structure for the book, including the src directory for content and the all-important SUMMARY.md file, which defines the book's navigation structure.14
  2. Configuring book.toml: The book.toml file is the heart of an mdBook project's configuration. This module involves a thorough exploration of its key options.15 The book's title and author will be set, and the HTML renderer options will be configured. This includes enabling or disabling section labels, adding a link to the source GitHub repository, and selecting a default theme.15
  3. Structuring the SUMMARY.md: The SUMMARY.md file dictates the table of contents and navigation hierarchy of the final website. This module will focus on understanding its syntax. A basic structure will be created, linking to the sample notes created in the Foam modules. This establishes the initial organization of the public-facing knowledge base.
  4. Enabling and Configuring Search: One of mdBook's most powerful features is its built-in, client-side search functionality. In the book.toml file, the search feature will be explicitly enabled and configured.15 Options like
    limit-results, use-boolean-and, and boost-title will be explored to understand how to fine-tune the search experience for users of the knowledge base.15
  5. Performing the First Manual Build: With the initial configuration in place, the mdbook build command will be run from the command line. This compiles the Markdown files from the src directory into a static HTML site in a new /book directory. The resulting site will be opened locally in a browser to verify that the configuration is correct, the links work as expected, and the overall structure is sound. This manual build serves as the baseline for the automated pipeline to come.16

Modules 16-20: The First Automated CI/CD Pipeline

This is the capstone of Phase I, where the manual processes of building and deploying are automated using GitHub Actions. This creates a Continuous Integration/Continuous Deployment (CI/CD) pipeline that ensures the published knowledge base is always in sync with the latest notes.17

  1. Creating the First Workflow File: A new workflow file will be created at .github/workflows/deploy-book.yml. This YAML file will define the automation steps. The workflow will be configured to trigger on a push event to the main branch, meaning it will run automatically every time new changes are committed.16
  2. Configuring the GitHub Actions Job: The workflow will contain a single job, build-and-deploy. This job will be configured to run on an ubuntu-latest runner. The first steps within the job will be to use the actions/checkout action to check out the repository's code onto the runner.17
  3. Installing mdBook on the Runner: To build the book, mdBook must be available on the CI runner. The most efficient method is to download a pre-compiled binary from the GitHub Releases page, which is fast and avoids the need to install the entire Rust toolchain.16 A workflow step will use
    curl to download and extract the mdBook executable.16
  4. Building and Deploying to GitHub Pages: The core of the workflow involves two steps. First, a step will run the mdbook build command, generating the static site in the /book directory. Second, a community action like peaceiris/actions-gh-pages will be used to deploy the contents of the /book directory to a special gh-pages branch in the repository.18 Repository settings will be configured to enable GitHub Pages and set the
    gh-pages branch as the deployment source.19
  5. Identifying the "Impedance Mismatch" and a Manual Workaround: Upon the first successful deployment, a critical challenge will become apparent. The [[wikilinks]] used for fluid navigation within Foam and VSCode are not standard Markdown links and will be broken in the final mdBook output.8 This "impedance mismatch" between the authoring environment and the publishing tool is a central technical hurdle of the chosen stack. Foam provides a command,
    Foam: Create markdown references for [[wikilinks]], which converts these links into a format that mdBook can understand.9 This module concludes by documenting this issue and establishing the manual execution of this command as a temporary workaround. This deliberate identification of a problem creates a clear and compelling motivation for developing a more sophisticated, automated scripting solution in later phases, transforming a potential frustration into a core learning objective of the 100-day plan.

Phase II: Architecting the Knowledge Graph (Modules 21-40)

Focus: Developing a systematic approach to knowledge capture, organization, and presentation. This phase moves from "getting the tools to work" to "using the tools effectively."

Modules 21-25: Knowledge Ingestion Framework

With the foundational infrastructure in place, the focus now shifts to establishing a structured process for exploring the 150 bucket-list topics. This involves leveraging GitHub's project management tools to create a systematic knowledge ingestion pipeline.

  1. Creating the "Topic Exploration" Project Board: A new GitHub Project will be created specifically for managing the 150 learning topics. This project will be configured as a Kanban board, providing a visual workflow for tracking topics as they move from idea to exploration.2
  2. Designing a Standardized Issue Template for Topics: To ensure consistency, a GitHub Issue template will be designed for new topics. This template, stored as a Markdown file in the .github/ISSUE_TEMPLATE directory, will pre-populate new issues with a standardized structure.3 Sections will include "Topic Summary," "Key Questions to Answer," "Initial Resources," and "Potential Connections," guiding the initial phase of research for any new subject.
  3. Populating the Backlog with Initial Topics: As a practical exercise, the first 10-15 topics from the user-provided list of 150 will be created as new Issues using the template designed in the previous module. These issues will form the initial "backlog" in the "Topic Exploration" project board.3
  4. Using Custom Fields for Topic Metadata: The project board will be enhanced with custom fields tailored for knowledge exploration. Fields like "Topic Category" (e.g., "Technology," "History," "Science"), "Priority" (e.g., "High," "Medium," "Low"), and "Status" (e.g., "Backlog," "Researching," "Synthesizing," "Published") will be added to provide richer metadata for each topic.5
  5. Linking Issues to a Milestone: To group related learning goals, a GitHub Milestone will be created, for example, "Q3 Learning Goals." A subset of the topic issues will be assigned to this milestone. This introduces another layer of organization, allowing for tracking progress against larger, time-bound objectives.2

Modules 26-30: Advanced Foam Techniques

This section moves beyond the basics of Foam to leverage its more powerful features for structuring and maintaining a high-quality knowledge graph.9

  1. Creating and Using Note Templates: To standardize the format of different types of notes, Foam's template feature will be implemented. Templates for various knowledge artifacts—such as book summaries, biographies, project overviews, or technology explainers—will be created. Using the Foam: Create New Note from Template command will then become the standard workflow, ensuring consistency and reducing repetitive work.9
  2. Mastering the Tag Explorer and Hierarchical Tags: Tags are a crucial tool for non-hierarchical organization. This module focuses on using the Tag Explorer panel to navigate the knowledge base. A tagging convention will be established, and the power of hierarchical tags (e.g., #tech/python/automation) will be explored to create more granular and organized connections between notes.9
  3. Managing Orphans and Placeholders: A healthy knowledge graph is a connected one. This module addresses graph maintenance by focusing on the "Orphans" and "Placeholders" panels in Foam.9 Orphans (notes with no links) and Placeholders (links to non-existent notes) will be regularly reviewed. A workflow will be established to either integrate orphaned notes into the graph or create new notes for placeholders, ensuring the knowledge base remains coherent and interconnected.10
  4. Embedding Note Content: To create composite documents and avoid content duplication, Foam's note embedding feature (![[note-name]]) will be utilized. This allows the content of one note to be dynamically included within another. This is particularly useful for creating "Maps of Content" (MOCs) or summary pages that pull in information from multiple atomic notes.9
  5. Leveraging Section Linking and Aliases: For more precise connections, linking to specific sections within a note ([[note-name#section-name]]) will be practiced.9 Additionally, link aliasing ([[note-name|custom display text]]) will be used to make links more readable and context-friendly within the body of a note, improving the overall narrative flow of the written content.9

Modules 31-35: Python for PKM - The First Scripts

This section marks the introduction of custom automation with Python. The initial scripts will focus on automating common maintenance and organization tasks within the knowledge base, demonstrating the power of scripting to manage the PKM at scale.21

  1. Setting Up the Python Environment: A local Python development environment will be configured. This includes installing a recent version of Python and using a virtual environment manager like venv to isolate project dependencies. The first script will be a simple "hello world" to verify the setup.
  2. Script 1: File Organizer based on Frontmatter: The first practical script will be a file organizer. This Python script will iterate through all Markdown files in the /notes directory. It will parse the YAML frontmatter of each file to read metadata (e.g., category: 'Technology'). Based on this metadata, the script will automatically move the file into a corresponding subdirectory (e.g., /notes/technology/). This automates a tedious organization task and introduces file system operations with Python's os module.22 (A sketch of this organizer appears after this list.)
  3. Script 2: Batch Tagging Utility: Building on the previous script, a batch tagging utility will be created. This script will take a directory and a tag as command-line arguments. It will then scan all files in that directory and append the specified tag to their frontmatter tag list. This is useful for applying a new project tag or category to a group of existing notes simultaneously.21
  4. Reading and Consolidating Notes: A script will be developed to demonstrate content processing. This script will read multiple text files (e.g., daily log files named YYYY-MM-DD.md) and consolidate their content into a single weekly or monthly summary file. This introduces file reading and writing operations and is a foundational step for more complex content analysis later on.21
  5. Integrating Scripts with the Command Line: The scripts will be enhanced to be more user-friendly by using Python's argparse module to handle command-line arguments. This makes them more flexible and reusable, transforming them from simple scripts into proper command-line tools for PKM management.
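
A minimal sketch of Script 1, assuming PyYAML is installed and that each note delimits its frontmatter with --- fences; directory layout and the category key follow the conventions described above.

    import shutil
    from pathlib import Path
    import yaml

    NOTES_DIR = Path("notes")

    def read_category(note: Path) -> str | None:
        text = note.read_text(encoding="utf-8")
        if not text.startswith("---"):
            return None
        # Frontmatter sits between the first two `---` fences
        _, frontmatter, _ = text.split("---", 2)
        metadata = yaml.safe_load(frontmatter) or {}
        return metadata.get("category")

    for note in NOTES_DIR.glob("*.md"):
        category = read_category(note)
        if category:
            target_dir = NOTES_DIR / category.lower()
            target_dir.mkdir(exist_ok=True)
            shutil.move(str(note), target_dir / note.name)  # relocate the note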

Modules 36-40: Enhancing mdBook Presentation

The final part of this phase focuses on customizing the appearance and functionality of the public-facing mdBook site, ensuring it is not just a repository of information but a polished and professional presentation of knowledge.

  1. Creating a Custom Theme: While mdBook comes with default themes, creating a custom look is essential for personalization. This module involves creating a theme directory and adding custom CSS files to override the default styles. This could involve changing colors, fonts, and layout to match a personal aesthetic.15
  2. Adding Custom JavaScript for Interactivity: To add dynamic behavior, custom JavaScript files will be integrated. This could be used for simple enhancements like adding a "back to top" button, or more complex features like integrating an external analytics service or adding interactive UI elements.15
  3. Integrating Preprocessors for Rich Content: mdBook's functionality can be extended with preprocessors. This module will explore adding support for features not natively included in Markdown. For example, the mdbook-mermaid preprocessor will be configured to allow for the rendering of Mermaid.js diagrams and flowcharts directly from code blocks, and MathJax support will be enabled for rendering complex mathematical equations.15
  4. Configuring a Professional Deployment: To ensure the deployed site functions correctly, especially with custom domains or subdirectories, the site-url option in book.toml will be properly configured. This is crucial for ensuring that links, CSS, and JavaScript files load correctly on the live server.16
  5. Customizing the 404 Error Page: A professional site needs a helpful error page. A custom 404.md file will be created in the src directory. mdBook will automatically convert this into a 404.html page that provides better navigation and user experience for visitors who encounter a broken link, which is a significant improvement over a generic server error.16

Phase III: AI Augmentation - The Intelligent Assistant (Modules 41-60)

Focus: Integrating a multi-tiered AI strategy to automate content processing and generate new insights. This is the core "AI-ification" phase.

Modules 41-45: AI Gateway Setup - OpenRouter & Google AI Studio

This section lays the groundwork for all future AI integration by setting up access to powerful, flexible AI models through API gateways. This approach provides access to a wide variety of models without being locked into a single provider.

  1. Creating an OpenRouter Account: OpenRouter serves as a unified API gateway to hundreds of AI models from various providers like Anthropic, Google, and Meta.23 An account will be created, and the dashboard will be explored to understand its features, including model availability, pricing, and usage tracking.24
  2. Generating and Securing API Keys: An API key will be generated from the OpenRouter dashboard. To maintain security best practices, this key will not be hard-coded into any scripts. Instead, it will be stored as an encrypted "secret" in the GitHub repository settings.1 This allows GitHub Actions workflows to securely access the key at runtime without exposing it in the codebase.
  3. Introduction to Google AI Studio: Google AI Studio is a web-based tool for rapidly prototyping prompts and experimenting with Google's Gemini family of models.26 It provides an intuitive interface for testing different prompting strategies without writing any code, making it an ideal environment for initial exploration and "vibe coding".26
  4. Prototyping PKM Prompts in AI Studio: Using Google AI Studio, several prompts tailored for PKM tasks will be developed and tested. This includes crafting system prompts for an AI assistant that can summarize long articles, extract key entities (people, places, concepts), generate a list of questions about a topic, or rephrase complex text into simpler terms. The iterative nature of the AI Studio playground allows for quick refinement of these prompts.28
  5. Understanding API Quotas and Billing: A crucial part of using cloud-based AI is managing costs. This module involves reviewing the billing and quota systems for both OpenRouter and Google AI. A budget will be set, and the prepaid credit system of OpenRouter will be explored as a way to control spending.23 Understanding the per-token pricing for different models is essential for making cost-effective choices later on.24

Modules 46-50: Your First AI-Powered Python Script

With API access established, the next step is to bring AI capabilities into the local development environment through Python scripting.

  1. Setting up the Python Environment for API Calls: The Python environment will be prepared by installing necessary libraries, such as requests for making HTTP calls or a provider-specific SDK like openai which is compatible with the OpenRouter API endpoint.23

  2. Script 3: The AI Summarizer: The first AI-powered script will be a text summarizer (a sketch follows this list). This Python script will:
    a. Read the content of a specified Markdown file from the /notes directory.
    b. Construct a prompt using the text content.
    c. Make a POST request to the OpenRouter API endpoint (/api/v1/chat/completions), passing the prompt and selecting a powerful general-purpose model like anthropic/claude-3.5-sonnet or meta-llama/llama-3.1-405b-instruct.24

    d. Parse the JSON response to extract the generated summary.
    e. Print the summary to the console.

  3. Handling API Keys and Responses in Python: The summarizer script will be refactored to securely access the API key from an environment variable rather than hard-coding it. Error handling will also be added to gracefully manage potential API issues, such as network errors, authentication failures, or rate limiting.30

  4. Writing Summaries Back to Files: The script will be enhanced to be more useful. Instead of just printing the summary, it will be modified to write the summary back into the original Markdown file. A good practice is to add it to the YAML frontmatter under a summary: key or in a dedicated ## AI Summary section at the end of the file.

  5. Exploring OpenRouter Parameters: The OpenRouter API offers numerous parameters to control model behavior, such as temperature, max_tokens, and top_p.30 This module involves experimenting with these parameters in the Python script to observe their effect on the quality, length, and creativity of the generated summaries, allowing for fine-tuning of the AI's output.
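
A hedged sketch of Script 3, assuming the OPENROUTER_API_KEY environment variable is set and the requests library is installed; the model slug, system prompt, and temperature are placeholders to adapt.

    import os
    import sys
    from pathlib import Path
    import requests

    note_text = Path(sys.argv[1]).read_text(encoding="utf-8")

    response = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={
            "model": "anthropic/claude-3.5-sonnet",
            "messages": [
                {"role": "system", "content": "Summarize the note in 3-5 sentences."},
                {"role": "user", "content": note_text},
            ],
            "temperature": 0.3,  # keep summaries focused rather than creative
        },
        timeout=120,
    )
    response.raise_for_status()
    summary = response.json()["choices"][0]["message"]["content"]
    print(summary)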

Modules 51-55: Specialized Models with Hugging Face

While API gateways are excellent for general-purpose tasks, some tasks benefit from specialized, fine-tuned models. Hugging Face is the leading platform for accessing these models.32

  1. Introduction to the Hugging Face Hub and Transformers Library: This module provides an overview of the Hugging Face ecosystem. The Hugging Face Hub will be explored to find models specifically fine-tuned for summarization. The transformers Python library, which provides a high-level API for using these models, will be installed.32

  2. Implementing the Summarization Pipeline: The transformers library offers a pipeline abstraction that simplifies the process of using a model for a specific task.34 A new Python script will be created that initializes a
    summarization pipeline, specifying a well-regarded model like facebook/bart-large-cnn.32

  3. Script 4: Hugging Face Summarizer: This script will use the initialized pipeline to summarize a piece of text. The code is often simpler than a direct API call:
    Python
    from transformers import pipeline

    # Load the summarization pipeline with a specific model
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

    ARTICLE = """ Your long text content here... """
    summary = summarizer(ARTICLE, max_length=150, min_length=40, do_sample=False)
     # The pipeline returns a list of dicts; print just the generated text
     print(summary[0]["summary_text"])

    This script will be tested on the same notes used in the OpenRouter module to compare results.32

  4. Comparing General vs. Specialized Models: This module involves a qualitative analysis comparing the summaries generated by the general-purpose model via OpenRouter and the specialized BART model from Hugging Face. The comparison will focus on aspects like factual accuracy, coherence, conciseness, and relevance to the source text. This provides a practical understanding of the trade-offs between using large, general models and smaller, task-specific ones.

  5. Integrating Hugging Face into the Workflow: The Hugging Face summarizer script will be integrated into the existing PKM workflow. It will be adapted to read from and write to files, just like the OpenRouter script, making it a viable alternative for the summarization task within the broader system.

Modules 56-60: Developing a Tiered AI Strategy

This section synthesizes the experiences from the previous modules into a coherent, strategic framework for using AI. Instead of treating each AI service as an isolated tool, the system will be designed to use them as a portfolio of resources, deployed intelligently based on the task's requirements.

  1. Defining the Tiers: Cost, Speed, Privacy, Capability: The AI resources available (OpenRouter, Hugging Face, and soon, local models via Ollama) will be categorized into tiers. For example:
    • Tier 1 (Local/Fast): Local Ollama models for low-cost, private, and fast tasks like simple text formatting or brainstorming.
    • Tier 2 (Specialized/Efficient): Hugging Face models for specific, well-defined tasks like summarization where a fine-tuned model excels.
    • Tier 3 (Powerful/Cloud): State-of-the-art models via OpenRouter for complex reasoning, high-quality content generation, or tasks requiring the largest context windows.
  2. Building a Python "Router" Function: A Python function or class will be created to encapsulate this tiered logic. This AIManager will have a method like process_text(task_type, text, priority). Based on the task_type (e.g., 'summarize', 'generate_questions') and priority, this function will decide which AI service and model to call.
  3. Implementing the Routing Logic: The AIManager will be implemented. For a 'summarize' task, it might default to the Hugging Face pipeline. For a 'brainstorm' task, it might use a local Ollama model. For a high-priority 'analyze_complex_document' task, it would route the request to a top-tier model through OpenRouter. This elevates the system from making simple API calls to making intelligent, resource-aware decisions. (A minimal sketch of this router follows the table below.)
  4. Creating a Reusable AI Toolkit: The AIManager and its related functions will be organized into a reusable Python module within the /scripts directory. This toolkit will be imported by all future automation scripts, ensuring that the tiered AI strategy is applied consistently across the entire PKM system.
  5. Formalizing the Model Selection Framework: The decision-making logic will be documented in a table. This framework serves as a quick reference for choosing the right tool for any given knowledge work task, moving from a reactive "what can this model do?" mindset to a proactive "what is the best model for this job?" approach.
| Task | Recommended Model(s) / Platform | Rationale | Tier |
| --- | --- | --- | --- |
| Quick Drafting & Brainstorming | ollama/llama3 or ollama/phi-2 | Local, fast, private, and no cost per token. Ideal for iterative and creative tasks. | 1 (Local) |
| High-Quality Summarization | Hugging Face (facebook/bart-large-cnn) | Fine-tuned specifically for summarization, providing concise and factually accurate output. | 2 (Specialized) |
| Fact Extraction & Data Structuring | OpenRouter (google/gemini-2.5-pro) | Excellent at following complex instructions and outputting structured data like JSON. | 3 (Cloud) |
| Complex Reasoning & Analysis | OpenRouter (anthropic/claude-3.5-sonnet) | Top-tier reasoning capabilities and large context window for analyzing dense documents. | 3 (Cloud) |
| Creative Writing & Rephrasing | OpenRouter (mistralai/mistral-large) | Known for its strong performance in creative and stylistic writing tasks. | 3 (Cloud) |
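
A minimal sketch of the AIManager router encoding the table above; the three provider methods are placeholders standing in for the Ollama, Hugging Face, and OpenRouter code developed in earlier modules.

    class AIManager:
        def process_text(self, task_type: str, text: str, priority: str = "normal") -> str:
            # Tier 3 (cloud): high-priority or reasoning-heavy work
            if priority == "high" or task_type == "analyze_complex_document":
                return self._openrouter(text, model="anthropic/claude-3.5-sonnet")
            # Tier 2 (specialized): summarization has a fine-tuned model
            if task_type == "summarize":
                return self._huggingface_summarize(text)
            # Tier 1 (local): default to the free, private Ollama model
            return self._ollama(text, model="llama3")

        def _openrouter(self, text: str, model: str) -> str:
            raise NotImplementedError("wrap the Script 3 OpenRouter call here")

        def _huggingface_summarize(self, text: str) -> str:
            raise NotImplementedError("wrap the Script 4 transformers pipeline here")

        def _ollama(self, text: str, model: str) -> str:
            raise NotImplementedError("wrap the local Ollama API call here")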

Phase IV: Hyper-Automation and Advanced Workflows (Modules 61-80)

Focus: Creating proactive, fully automated pipelines that require minimal manual intervention. This phase builds the "intelligent nervous system" of the PKM.

Modules 61-70: Advanced GitHub Actions Workflows

This section focuses on creating a sophisticated, multi-stage GitHub Action that fully automates the process of content enrichment, connecting the file system, Python scripts, AI models, and the deployment pipeline.

  1. Designing the "Content Enrichment" Workflow: A new, more advanced GitHub Actions workflow will be designed. The goal is to create a system that automatically processes a new note, enriches it with AI-generated content, and deploys the result without any manual steps.
  2. Triggering Workflows with Path Filters and Tags: The workflow will be configured to trigger conditionally. It will run on pushes to the main branch but only when files in the /notes directory are modified. A convention will be established where adding a specific tag, like #summarize, to a note's frontmatter signals the workflow to process that specific file.
  3. Workflow Step: Identifying Target Files: The first step in the Action's job will be to identify which files have been changed in the latest commit and need processing. A simple shell script or a dedicated GitHub Action can be used to get the list of modified files. (A Python sketch of this step appears after this list.)
  4. Workflow Step: Running the AI Python Script: The workflow will then set up the Python environment and run the AIManager script developed in Phase III. The script will be called with the path to the modified file as an argument.
  5. Workflow Step: Committing Changes Back to the Repository: After the Python script runs and modifies the note file (e.g., by adding a summary), the GitHub Action must commit this change back to the repository. This requires configuring Git within the action, setting a user and email, and using git commit and git push. A special commit message like "chore(AI): Add summary to [filename]" will be used to denote automated changes.
  6. Handling Recursive Workflow Triggers: A critical challenge in this setup is that the workflow pushes a commit, which would normally trigger the workflow again, creating an infinite loop. This will be prevented by adding a condition to the commit step or the workflow trigger to ignore commits made by the Actions bot itself (e.g., by checking the commit message).
  7. Chaining Workflows: Instead of putting everything in one massive file, the content enrichment workflow will be configured to trigger the existing mdBook deployment workflow upon its successful completion. This can be done using the workflow_run event or by using a reusable "callable" workflow, which is a more modern approach.
  8. Adding an Issue Commenting Step: To provide feedback, a final step will be added to the workflow. Using an action like peter-evans/create-or-update-comment, the workflow will find the corresponding GitHub Issue for the topic and post a comment indicating that the note has been automatically updated and a new version has been deployed, including a link to the published page.
  9. Full End-to-End Test: A full test of the pipeline will be conducted. A new note will be created locally, tagged for summarization, and pushed to GitHub. The process will be monitored in the GitHub Actions tab, from the initial trigger to the AI processing, the commit back, the mdBook deployment, and the final comment on the issue.
  10. Refactoring for Reusability: The workflow will be refactored to make it more modular. The Python script execution and the mdBook deployment steps will be broken into separate, reusable composite actions or callable workflows, making the main workflow file cleaner and easier to maintain.7
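
As a sketch of the target-file identification step above, the following Python compares the last two commits and prints changed notes whose frontmatter mentions the summarize tag; it assumes it runs inside the checked-out repository with the /notes layout and tag convention described earlier.

    import subprocess
    from pathlib import Path

    changed = subprocess.run(
        ["git", "diff", "--name-only", "HEAD~1", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    for name in changed:
        path = Path(name)
        if path.suffix == ".md" and path.parts[:1] == ("notes",) and path.exists():
            pieces = path.read_text(encoding="utf-8").split("---", 2)
            if len(pieces) > 2 and "summarize" in pieces[1]:  # crude frontmatter check
                print(path)  # downstream steps consume this list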

Modules 71-75: Local LLMs with Ollama

This section introduces local large language models using Ollama, adding a powerful, private, and cost-effective tier to the AI strategy.35

  1. Installing and Configuring Ollama: Ollama will be installed on the local machine. The command-line interface will be used to pull down a versatile, medium-sized model like Llama 3 (ollama pull llama3) or a smaller, efficient model like Phi-2 (ollama pull phi-2).35
  2. Interacting with Local Models via CLI and API: The first interactions will be through the command line using ollama run llama3. This provides a feel for the model's performance and personality. Subsequently, the Ollama REST API, which runs locally on port 11434, will be explored. A tool like curl or Postman will be used to send requests to the API, demonstrating how to interact with the local model programmatically.36
  3. Creating a Custom Model with a Modelfile: To tailor a model for specific PKM tasks, a Modelfile will be created.37 This file defines a custom model based on a parent model (e.g.,
    FROM llama3). It will include a SYSTEM prompt to give the model a specific persona, such as a "Socratic Inquisitor" whose role is to respond to any text by generating three probing questions to deepen understanding. Parameters like temperature can also be set to control creativity.38
  4. Building and Running the Custom Model: The ollama create command will be used to build the custom model from the Modelfile, giving it a unique name (e.g., socratic-inquisitor). This new model will then be available to run via ollama run socratic-inquisitor and through the API.37
  5. Integrating Ollama into the Python AI Toolkit: The AIManager Python module will be updated to include Ollama as a new AI provider. A new function will be added that makes API calls to the local Ollama server. The routing logic will be updated to use the local model for specific tasks, such as brainstorming or generating questions, officially adding the "Tier 1 (Local)" capability to the system.36
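
A minimal sketch of that Tier 1 provider call, assuming Ollama is listening on its default port 11434 and the socratic-inquisitor model has been created from the Modelfile:

    import requests

    def ask_local_model(prompt: str, model: str = "socratic-inquisitor") -> str:
        response = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=300,
        )
        response.raise_for_status()
        return response.json()["response"]  # the full generated text

    print(ask_local_model("The map is not the territory."))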

Modules 76-80: Containerization with Docker

To ensure the PKM system's environment is consistent, portable, and reproducible, this section introduces containerization using Docker. This brings professional DevOps practices to the personal project.

  1. Introduction to Docker Concepts: The core concepts of Docker will be reviewed: images, containers, Dockerfiles, and volumes. The benefits of containerization for creating isolated and predictable environments will be discussed.
  2. Running Ollama in a Docker Container: As a first practical step, instead of running Ollama directly on the host machine, it will be run inside a Docker container using the official ollama/ollama image.35 This involves running the container, mapping the necessary ports, and using a volume to persist the downloaded models, ensuring they are not lost when the container stops.
  3. Writing a Dockerfile for the Python Scripts: A Dockerfile will be written for the PKM's Python automation tools. This file will define a custom image that:
    a. Starts from a base Python image.
    b. Copies the requirements.txt file and installs the dependencies.
    c. Copies the /scripts directory into the image.
    d. Sets up any necessary environment variables.
  4. Building and Running the Custom Python Container: The docker build command will be used to create an image from the Dockerfile. Then, docker run will be used to start a container from this image and execute one of the automation scripts, demonstrating that the entire toolchain can run in a self-contained environment.
  5. Exploring Other Self-Hosted PKM Tools: Docker makes it easy to experiment with other open-source tools. This module involves exploring the Docker images for other self-hosted PKM platforms like Memos or Siyuan.39 By running these tools locally in containers, new ideas and features can be discovered and potentially incorporated into the custom PKM system, all without polluting the host machine with new dependencies.

Phase V: Frontier Exploration and Custom Tooling (Modules 81-100)

Focus: Pushing the boundaries of PKM by building high-performance, custom components and exploring next-generation AI platforms.

Modules 81-90: High-Performance PKM with Rust

This section directly addresses the "impedance mismatch" problem identified in Phase I by building a custom, high-performance command-line utility in Rust. This provides a tangible, valuable project that motivates learning a new, more complex language and demonstrates a clear progression in technical capability.

  1. Setting up the Rust Development Environment: The Rust toolchain, including rustup and cargo, will be installed. A new binary crate will be created using cargo new foam-link-converter. The basics of the Rust language will be explored, focusing on concepts relevant to this project: file system operations, string manipulation, and error handling.
  2. Designing the Link Conversion Utility: The command-line tool's logic will be designed. It will need to:
    a. Accept a directory path as a command-line argument.
    b. Recursively walk through the directory to find all .md files.
    c. For each file, read its content into a string.
    d. Use regular expressions to find all instances of Foam's [[wikilink]] syntax.
    e. For each found wikilink, determine the correct relative path to the target file.
    f. Replace the [[wikilink]] with a standard Markdown link ([wikilink](./path/to/file.md)).
    g. Write the modified content back to the file.
  3. Implementing File System Traversal in Rust: The first part of the implementation will focus on safely and efficiently traversing the notes directory. Rust libraries like walkdir will be used for this purpose.
  4. Parsing and Replacing Links with Regex: Rust's powerful regex crate will be used to implement the core link-finding and replacement logic. This module will focus on crafting a robust regular expression that can handle simple links, aliases, and section links.
  5. Handling Edge Cases and Path Logic: A simple replacement is not enough. The tool must be intelligent. For a link like [[my-note]], the tool needs to find the file my-note.md within the directory structure and calculate the correct relative path from the source file to the target file. This involves path manipulation using Rust's standard library.
  6. Compiling for Performance: The Rust code will be compiled in release mode (cargo build --release). The performance of this compiled binary will be compared to a hypothetical Python script performing the same task (sketched after this list), highlighting the significant speed advantage of a compiled language like Rust for I/O- and CPU-intensive tasks. This provides a concrete demonstration of moving up the "performance ladder" from interpreted to compiled languages.
  7. Integrating the Rust Tool into the GitHub Action: The compiled binary will be checked into the repository or built as part of the CI process. The main GitHub Actions workflow will be modified to run this custom utility as a build step before mdbook build is called. This completely automates the solution to the wikilink problem.
  8. Exploring Other Rust-Based PKM Tools: To gain further inspiration from the Rust ecosystem, notable open-source PKM tools built with Rust, such as AppFlowy, will be reviewed.41 Examining their architecture and feature sets can provide ideas for future enhancements to the custom system.
  9. Publishing the Crate (Optional): As an extension, the foam-link-converter utility can be published to crates.io, Rust's public package registry. This provides experience with the full lifecycle of creating and sharing an open-source tool.
  10. Finalizing the Automated Linking Workflow: The end-to-end workflow is now complete. A user can write notes in VSCode using fluid [[wikilinks]], push the changes to GitHub, and the automated pipeline will use a custom-built, high-performance Rust utility to seamlessly convert the links for publication with mdBook. This represents a significant engineering achievement within the PKM project.
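
For reference, the "hypothetical Python script" mentioned in module 6 of this list might look like the sketch below: a slow-but-clear prototype of the same [[wikilink]] conversion the Rust tool performs. The regex covers plain, aliased, and section links but is deliberately simplistic.

    import os
    import re
    from pathlib import Path

    NOTES_DIR = Path("notes")
    WIKILINK = re.compile(r"\[\[([^\]|#]+)(?:#[^\]|]*)?(?:\|([^\]]+))?\]\]")

    # Index every note by file stem so [[my-note]] resolves to a path
    index = {p.stem: p for p in NOTES_DIR.rglob("*.md")}

    def convert(match: re.Match, source: Path) -> str:
        target = index.get(match.group(1).strip())
        if target is None:
            return match.group(0)  # leave placeholder links untouched
        label = match.group(2) or match.group(1)
        relative = os.path.relpath(target, source.parent)
        return f"[{label}]({relative})"

    for note in NOTES_DIR.rglob("*.md"):
        text = note.read_text(encoding="utf-8")
        converted = WIKILINK.sub(lambda m: convert(m, note), text)
        if converted != text:
            note.write_text(converted, encoding="utf-8")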

Modules 91-95: Exploring the Modular Platform (Mojo & MAX)

This section ventures into the cutting edge of AI infrastructure, exploring the Modular Platform to understand how to achieve state-of-the-art performance for AI tasks.42

  1. Introduction to Modular, Mojo, and MAX: The Modular ecosystem will be introduced. Mojo is a programming language that combines the usability of Python with the performance of C and Rust, designed specifically for AI developers.43 MAX is Modular's suite of AI libraries and tools for high-performance inference.45
  2. Installing the Modular SDK: The Modular SDK will be installed, providing access to the Mojo compiler and MAX tools. The native VSCode extension for Mojo will also be installed to get syntax highlighting and language support.42
  3. Writing "Hello World" in Mojo: The first Mojo program will be written and compiled. This will introduce Mojo's syntax, which is a superset of Python, and concepts like strong typing with var and fn for function definitions.44
  4. Running a Pre-Optimized Model with MAX Serving: The power of the MAX platform will be demonstrated by running a pre-optimized model from the Modular model repository. Using the max serve command, an OpenAI-compatible API endpoint will be started locally, serving a model like Llama 3.45 The performance (tokens per second) of this endpoint will be observed and compared to other inference methods, showcasing the benefits of Modular's optimizations.43
  5. Experimenting with a Mojo Script: A simple Mojo script will be written to interact with the MAX-served model. This provides a glimpse into how Mojo can be used to write the high-performance "glue code" for AI applications, bridging the gap between Python's ease of use and the need for speed in production AI systems.43
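
Because the MAX endpoint is OpenAI-compatible, it can first be exercised from plain Python (Mojo's syntax being a superset of Python, the same logic ports over). A sketch, assuming the openai package is installed; the port and model id are assumptions to verify against the max serve startup output:

    from openai import OpenAI

    # Point the OpenAI client at the local MAX server (port is an assumption)
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    completion = client.chat.completions.create(
        model="llama-3",  # placeholder: use the id reported by `max serve`
        messages=[{"role": "user", "content": "Summarize the Zettelkasten method."}],
    )
    print(completion.choices[0].message.content)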

Modules 96-100: Capstone Project - The "Topic Delver" Agent

This final project synthesizes all the skills and components developed over the previous 95 days into a single, powerful, and fully automated "agent" that actively assists in the knowledge exploration process.

  1. Designing the "Topic Delver" Agent Workflow: A master GitHub Action will be designed. This workflow will trigger when a GitHub Issue on the "Topic Exploration" project board is moved into the "Researching" column. This project management action becomes the starting signal for the automated agent.1
  2. Step 1: Initial Information Gathering (Python + OpenRouter): The workflow will trigger a Python script. This script will take the title of the GitHub Issue as input. It will use the OpenRouter API to query a powerful model, instructing it to perform a simulated web search to find 3-5 key articles, videos, or papers related to the topic.23
  3. Step 2: Generating Foundational Questions (Python + Ollama): The script will then take the gathered resources and the issue summary and pass them to the custom "socratic-inquisitor" model running locally via Ollama. The model's task is to generate a list of 5-10 foundational questions that should be answered to gain a deep understanding of the topic.35
  4. Step 3: Creating the "Topic Hub" Note: The Python script will then create a new Markdown file in the /notes directory. The filename will be based on the issue title. This file will be pre-populated using a template that includes the list of resources gathered by OpenRouter and the foundational questions generated by Ollama. (A sketch of this step follows the list.)
  5. Step 4: Finalizing and Notifying (Rust, mdBook, GitHub API): The workflow will then execute the custom Rust foam-link-converter utility to ensure all links are correct. It will commit the new note file to the repository, which in turn triggers the mdBook deployment workflow. As a final step, the workflow will use the GitHub API to post a comment back to the original Issue stating that the Topic Hub has been created, with a link to the published note, completing the automated loop from task management to knowledge creation. This capstone project exemplifies a truly AI-augmented PKM system, where the system itself becomes an active partner in the process of learning and exploration.
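
A sketch of Step 3's note assembly, with a hypothetical helper whose resources and questions arguments would come from the OpenRouter and Ollama calls in the previous steps:

    from pathlib import Path

    def build_topic_hub(issue_title: str, resources: list[str], questions: list[str]) -> Path:
        slug = issue_title.lower().replace(" ", "-")
        body = [
            "---",
            f"title: {issue_title}",
            "tags: [topic-hub]",
            "---",
            "",
            "## Initial Resources",
            *[f"- {r}" for r in resources],
            "",
            "## Foundational Questions",
            *[f"- {q}" for q in questions],
        ]
        path = Path("notes") / f"{slug}.md"
        path.write_text("\n".join(body) + "\n", encoding="utf-8")
        return path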

Works cited

  1. Automating Projects using Actions - GitHub Docs, accessed September 1, 2025, https://docs.github.com/en/issues/planning-and-tracking-with-projects/automating-your-project/automating-projects-using-actions
  2. Planning and tracking with Projects - GitHub Docs, accessed September 1, 2025, https://docs.github.com/en/issues/planning-and-tracking-with-projects
  3. GitHub Issues · Project planning for developers, accessed September 1, 2025, https://github.com/features/issues
  4. Using GitHub issues to manage my tasks because I got tired of all the markdown files. : r/ClaudeAI - Reddit, accessed September 1, 2025, https://www.reddit.com/r/ClaudeAI/comments/1mozlq0/using_github_issues_to_manage_my_tasks_because_i/
  5. About Projects - GitHub Docs, accessed September 1, 2025, https://docs.github.com/issues/planning-and-tracking-with-projects/learning-about-projects/about-projects
  6. kamranahmedse/developer-roadmap: Interactive roadmaps, guides and other educational content to help developers grow in their careers. - GitHub, accessed September 1, 2025, https://github.com/kamranahmedse/developer-roadmap
  7. I saved 10+ of repetitive manual steps using just 4 GitHub Actions workflows - Reddit, accessed September 1, 2025, https://www.reddit.com/r/devops/comments/1jbajbr/i_saved_10_of_repetitive_manual_steps_using_just/
  8. A personal knowledge management and sharing system for VSCode - Foam, accessed September 1, 2025, https://foambubble.github.io/foam/
  9. foambubble/foam: A personal knowledge management and sharing system for VSCode - GitHub, accessed September 1, 2025, https://github.com/foambubble/foam
  10. Foam - Visual Studio Marketplace, accessed September 1, 2025, https://marketplace.visualstudio.com/items?itemName=foam.foam-vscode
  11. Recommended Extensions | Foam, accessed September 1, 2025, https://foam-template-gatsby-kb.vercel.app/recommended-extensions
  12. Recommended Extensions - Foam, accessed September 1, 2025, https://foambubble.github.io/foam/user/getting-started/recommended-extensions.html
  13. Visual Studio Code Extensions - thecrumb, accessed September 1, 2025, https://www.thecrumb.com/posts/2022-12-21-my-vscode-extensions/
  14. Introduction - mdBook Documentation, accessed September 1, 2025, https://rust-lang.github.io/mdBook/
  15. Renderers - mdBook Documentation - GitHub Pages, accessed September 1, 2025, https://rust-lang.github.io/mdBook/format/configuration/renderers.html
  16. Continuous Integration - mdBook Documentation - GitHub Pages, accessed September 1, 2025, https://rust-lang.github.io/mdBook/continuous-integration.html
  17. Creating Your First CI/CD Pipeline Using GitHub Actions | by Brandon Kindred - Medium, accessed September 1, 2025, https://brandonkindred.medium.com/creating-your-first-ci-cd-pipeline-using-github-actions-81c668008582
  18. peaceiris/actions-gh-pages: GitHub Actions for GitHub Pages Deploy static files and publish your site easily. Static-Site-Generators-friendly., accessed September 1, 2025, https://github.com/peaceiris/actions-gh-pages
  19. Step by step to publish mdBook in gh-pages · Issue #1803 - GitHub, accessed September 1, 2025, https://github.com/rust-lang/mdBook/issues/1803
  20. How to build mdBook with Github Actions | by katopz | Medium - Level Up Coding, accessed September 1, 2025, https://levelup.gitconnected.com/how-to-build-mdbook-with-github-actions-eb9899e55d7e
  21. Beginner's Guide To Python Automation Scripts (With Code ..., accessed September 1, 2025, https://zerotomastery.io/blog/python-automation-scripts-beginners-guide/
  22. 19 Super-Useful Python Scripts to Automate Your Daily Tasks - Index.dev, accessed September 1, 2025, https://www.index.dev/blog/python-automation-scripts
  23. OpenRouter: A unified interface for LLMs | by Dagang Wei | Medium, accessed September 1, 2025, https://medium.com/@weidagang/openrouter-a-unified-interface-for-llms-eda4742a8aa4
  24. Community Providers: OpenRouter - AI SDK, accessed September 1, 2025, https://ai-sdk.dev/providers/community-providers/openrouter
  25. Models - OpenRouter, accessed September 1, 2025, https://openrouter.ai/models
  26. Google AI Studio | Gemini API | Google AI for Developers, accessed September 1, 2025, https://ai.google.dev/aistudio
  27. Google AI Studio, accessed September 1, 2025, https://aistudio.google.com/
  28. Google AI Studio quickstart - Gemini API, accessed September 1, 2025, https://ai.google.dev/gemini-api/docs/ai-studio-quickstart
  29. Google AI Studio for Beginners - YouTube, accessed September 1, 2025, https://www.youtube.com/watch?v=IHOJUJjZbzc
  30. OpenRouter API Reference | Complete API Documentation ..., accessed September 1, 2025, https://openrouter.ai/docs/api-reference/overview
  31. Completion | OpenRouter | Documentation, accessed September 1, 2025, https://openrouter.ai/docs/api-reference/completion
  32. Summarizing Text Using Hugging Face's BART Model - DEV Community, accessed September 1, 2025, https://dev.to/dm8ry/summarizing-text-using-hugging-faces-bart-model-14p5
  33. How to Build A Text Summarizer Using Huggingface Transformers - freeCodeCamp, accessed September 1, 2025, https://www.freecodecamp.org/news/how-to-build-a-text-summarizer-using-huggingface-transformers/
  34. Pipelines - Hugging Face, accessed September 1, 2025, https://huggingface.co/docs/transformers/main_classes/pipelines
  35. How to Run LLMs Locally with Ollama - Medium, accessed September 1, 2025, https://medium.com/cyberark-engineering/how-to-run-llms-locally-with-ollama-cb00fa55d5de
  36. Running LLM Locally: A Beginner's Guide to Using Ollama | by Arun Patidar | Medium, accessed September 1, 2025, https://medium.com/@arunpatidar26/running-llm-locally-a-beginners-guide-to-using-ollama-8ea296747505
  37. ollama/ollama: Get up and running with OpenAI gpt-oss ... - GitHub, accessed September 1, 2025, https://github.com/ollama/ollama
  38. Learn Ollama in 15 Minutes - Run LLM Models Locally for FREE - YouTube, accessed September 1, 2025, https://www.youtube.com/watch?v=UtSSMs6ObqY
  39. usememos/memos: A modern, open-source, self-hosted knowledge management and note-taking platform designed for privacy-conscious users and organizations. - GitHub, accessed September 1, 2025, https://github.com/usememos/memos
  40. siyuan-note/siyuan: A privacy-first, self-hosted, fully open source personal knowledge management software, written in typescript and golang. - GitHub, accessed September 1, 2025, https://github.com/siyuan-note/siyuan
  41. Best Open Source Personal Knowledge ... - OpenAlternative, accessed September 1, 2025, https://openalternative.co/categories/personal-knowledge-management-pkm/using/rust
  42. Modular: A Fast, Scalable Gen AI Inference Platform, accessed September 1, 2025, https://www.modular.com/
  43. Modular Documentation | Modular, accessed September 1, 2025, https://docs.modular.com/
  44. Get started with Mojo - Modular docs, accessed September 1, 2025, https://docs.modular.com/mojo/manual/get-started/
  45. The Modular Platform (includes MAX & Mojo) - GitHub, accessed September 1, 2025, https://github.com/modular/modular