Private LLMs for GitHub Actions
December 23, 2024

GitHub has been an enthusiastic adopter of artificial intelligence, and its Copilot platform helps developers understand, write, and debug code. However, using Copilot in an automated way within a GitHub Actions workflow is not easy.

SecondBrain is a new GitHub Action that supports the use of LLMs in GitHub Actions workflows. It works by deploying Ollama as a Docker container hosting an LLM; Ollama is then called through a custom CLI that automatically builds a retrieval-augmented generation (RAG) prompt embedding the details of a git commit.

SecondBrain works as follows:

  1. You pass in the SHA of the git commit you want to query
  2. The GitHub token you pass in is used to query the GitHub REST API for the commit's details
  3. You define a prompt to pass to the LLM, which has access to a digest of the git commit referenced by the SHA
  4. SecondBrain queries GitHub for the details of the commit associated with the SHA, summarizes the commit's diffs, places the summary in the prompt context, and then passes the context and prompt to the LLM
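The first three steps above can be sketched in Python. This is an illustration only, not SecondBrain's actual implementation; the function names are hypothetical, and the summarization here is simply concatenating the patches from the commit rather than a real digest:

```python
import json
import urllib.request

GITHUB_API = "https://api.github.com"

def fetch_commit(owner: str, repo: str, sha: str, token: str) -> dict:
    """Query the GitHub REST API for the details of a single commit."""
    req = urllib.request.Request(
        f"{GITHUB_API}/repos/{owner}/{repo}/commits/{sha}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def build_rag_prompt(prompt: str, commit: dict) -> str:
    """Embed the commit's diffs into the prompt context (the RAG step)."""
    diffs = "\n".join(
        f"File: {f['filename']}\n{f.get('patch', '(no patch available)')}"
        for f in commit.get("files", [])
    )
    return f"Context:\n{diffs}\n\nQuestion: {prompt}"
```

The resulting string is what gets handed to the LLM: the commit diffs act as retrieved context, and your prompt becomes the question asked against that context.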

Here is a sample workflow YAML file that produces a summary of each commit to the main branch:

name: Summarize the commit

on:
  workflow_dispatch:
  push:
    branches:
      - main

jobs:
  summarize:
    runs-on: ubuntu-latest
    steps:
      - name: SecondBrainAction
        id: secondbrain
        uses: mcasperson/SecondBrain@main
        with:
            prompt: 'Provide a summary of the changes from the git diffs. Use plain language. You will be penalized for offering code suggestions. You will be penalized for sounding excited about the changes.'
            token: ${{ secrets.GITHUB_TOKEN }}
            owner: ${{ github.repository_owner }}
            repo: ${{ github.event.repository.name }}
            sha: ${{ github.sha }}
      - name: Get the diff summary
        env:
            RESPONSE: ${{ steps.secondbrain.outputs.response }}
        run: echo "$RESPONSE"

The output of the Get the diff summary step is shown below:

Here is a summary of the changes:

The README.md file now indicates that the sha input is required and cannot have a default value [2]. The action.yml file has also removed the option to provide a default value for the sha input [1].

- [1]: The action.yml file was also updated to remove the default value for the sha input (52b40e59684d17e5fddc95c4dba3cdc82e4f7b7d)
- [2]: The README.md file was updated to add a note that the sha input is mandatory and has no default value (52b40e59684d17e5fddc95c4dba3cdc82e4f7b7d)

Links:
- [52b40e59684d17e5fddc95c4dba3cdc82e4f7b7d](https://github.com/mcasperson/SecondBrain/commit/52b40e59684d17e5fddc95c4dba3cdc82e4f7b7d)

If you click the link in the report, you can see the commit that produced this summary. The description of the commit is accurate and easier to read than reviewing the diff directly.

It’s important to note that this action never calls any external service other than GitHub itself. There is no need to host your own LLM infrastructure, as the entire process is handled by a private local LLM exposed by Ollama.
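The final step, calling the local model, can be sketched like this. Again, this is an illustration rather than SecondBrain's actual code; Ollama's generate endpoint and its default port (11434) are real, but the model name `llama3` is an assumption:

```python
import json
import urllib.request

# Ollama's default local API endpoint; requests never leave the runner.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "llama3") -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint.
    The model name is an assumption; use whichever model you have pulled."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def summarize_locally(prompt: str, model: str = "llama3") -> str:
    """Send the prompt to the local Ollama instance and return its response."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(prompt, model),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

Because the endpoint is bound to localhost, the commit diffs and the model's output stay on the GitHub Actions runner for the duration of the job.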

Try it out and let me know if it works for you.
