Nostr RAG (Retrieval-Augmented Generation)

This module provides Retrieval-Augmented Generation (RAG) functionality integrated with the Nostr network. It enables agents to fetch Nostr events, index them in a vector store, and generate contextually relevant answers from them.

pydantic model agentstr.nostr_rag.Author[source]

Bases: BaseModel

JSON schema:
{
   "title": "Author",
   "type": "object",
   "properties": {
      "pubkey": {
         "title": "Pubkey",
         "type": "string"
      },
      "name": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "title": "Name"
      }
   },
   "required": [
      "pubkey"
   ]
}

Fields:
field pubkey: str [Required]
field name: str | None = None
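
For illustration, a minimal construction of this model (the pubkey value is a placeholder; this page does not specify whether hex or 'npub' encoding is expected):

from agentstr.nostr_rag import Author

author = Author(pubkey="npub1...", name="Alice")  # name is optional
anonymous = Author(pubkey="npub1...")             # name defaults to None
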
class agentstr.nostr_rag.NostrRAG(nostr_client: NostrClient | None = None, vector_store=None, relays: list[str] | None = None, private_key: str | None = None, nwc_str: str | None = None, embeddings=None, llm=None, llm_model_name=None, llm_base_url=None, llm_api_key=None, known_authors: list[Author] | None = None)[source]

Bases: object

Retrieval-Augmented Generation (RAG) system for Nostr events.

This class fetches Nostr events, builds a vector store knowledge base, and enables semantic search and question answering over the indexed content.

__init__(nostr_client: NostrClient | None = None, vector_store=None, relays: list[str] | None = None, private_key: str | None = None, nwc_str: str | None = None, embeddings=None, llm=None, llm_model_name=None, llm_base_url=None, llm_api_key=None, known_authors: list[Author] | None = None)[source]

Initialize the NostrRAG system.

Parameters:
  • nostr_client – An existing NostrClient instance (optional).

  • vector_store – An existing vector store instance (optional).

  • relays – List of Nostr relay URLs (if no client provided).

  • private_key – Nostr private key in ‘nsec’ format (if no client provided).

  • nwc_str – Nostr Wallet Connect string for payments (optional).

  • embeddings – Embedding model for vectorizing documents (defaults to FakeEmbeddings with size 256).

  • llm – Language model (optional).

  • llm_model_name – Name of the language model to use (optional).

  • llm_base_url – Base URL for the language model (optional).

  • llm_api_key – API key for the language model (optional).

  • known_authors – List of known Author objects to use for author-based queries (optional).

Raises:

ImportError – If LangChain is not installed.
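
A minimal construction sketch, assuming LangChain is installed. The relay URL, key, and model settings below are placeholder values chosen for illustration, not defaults of this library:

from agentstr.nostr_rag import NostrRAG

rag = NostrRAG(
    relays=["wss://relay.damus.io"],           # placeholder relay list
    private_key="nsec1...",                    # placeholder 'nsec' key
    llm_model_name="gpt-4o-mini",              # hypothetical model name
    llm_base_url="https://api.openai.com/v1",  # hypothetical endpoint
    llm_api_key="sk-...",                      # placeholder API key
)

If embeddings is omitted, documents are vectorized with FakeEmbeddings (size 256), which is suitable only for testing.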

async build_knowledge_base(question: str, limit: int = 10, query_type: Literal['hashtags', 'authors'] = 'hashtags') → list[dict][source]

Build a knowledge base from Nostr events relevant to the question.

Parameters:
  • question – The user’s question to guide hashtag selection

  • limit – Maximum number of posts to retrieve

  • query_type – Type of query to use (hashtags or authors)

Returns:

List of retrieved events
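
A self-contained usage sketch (relay and key are placeholders; the question text is illustrative):

import asyncio
from agentstr.nostr_rag import NostrRAG

async def main():
    rag = NostrRAG(relays=["wss://relay.damus.io"], private_key="nsec1...")  # placeholders
    events = await rag.build_knowledge_base(
        "What are people saying about bitcoin?",
        limit=10,
        query_type="hashtags",
    )
    print(f"Retrieved {len(events)} events")

asyncio.run(main())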

async retrieve(question: str, limit: int = 5, query_type: Literal['hashtags', 'authors'] = 'hashtags') → list[Document][source]

Retrieve relevant documents from the knowledge base.

Parameters:
  • question – The user’s question

  • limit – Maximum number of documents to retrieve

  • query_type – Type of query to use (hashtags or authors)

Returns:

List of retrieved documents
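
Usage sketch. Since the return type is list[Document] and the class raises ImportError without LangChain, the page_content attribute below assumes LangChain's Document class:

# rag: a NostrRAG instance as constructed above; run inside an async function
docs = await rag.retrieve("bitcoin adoption", limit=5, query_type="hashtags")
for doc in docs:
    print(doc.page_content)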

async query(question: str, limit: int = 5, query_type: Literal['hashtags', 'authors'] = 'hashtags') → str[source]

Ask a question using the knowledge base.

Parameters:
  • question – The user’s question

  • limit – Number of documents to retrieve for context

  • query_type – Type of query to use (hashtags or authors)

Returns:

The generated response
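
Usage sketch (inside an async function; the question text is illustrative):

# rag: a NostrRAG instance as constructed above
answer = await rag.query(
    "Summarize recent posts about nostr",
    limit=5,
    query_type="hashtags",
)
print(answer)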