Today's digest
# Tencent/WeKnora
**URL:** https://github.com/Tencent/WeKnora
**One-liner:** Open-source LLM knowledge platform: turn raw documents into a queryable RAG, an autonomous reasoning agent, and a self-maintaining Wiki.
**Relevance to oss-digest:** medium (65/100)
**Integration:** depend-on-it
## Summary
An LLM-powered knowledge platform that ingests documents, builds a retrieval-augmented generation (RAG) index, and auto-generates a wiki, with agent capabilities on top.
## Why it's useful here
oss-digest already pulls OSS projects and uses DeepSeek for triage. WeKnora could index collected project info into a searchable knowledge base with agent-driven summarization and cross-linking.
## Suggested use
Run WeKnora as a sidecar service, use its API to ingest curated OSS project metadata, then replace current DB queries with WeKnora's RAG and wiki mode.
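The sidecar integration could look roughly like this sketch. The endpoint path (`/api/v1/documents`), field names, and bearer-token auth are assumptions for illustration, not WeKnora's documented API; check the repo's API reference before wiring this up.

```typescript
// Sketch: pushing curated OSS project metadata into a WeKnora-style
// knowledge base over HTTP (endpoint and schema are hypothetical).
interface ProjectMeta {
  name: string;
  url: string;
  oneLiner: string;
  summary: string;
}

// Build the ingestion request body; kept pure so it is easy to test.
function buildIngestPayload(p: ProjectMeta) {
  return {
    title: p.name,
    source_url: p.url,
    content: `${p.oneLiner}\n\n${p.summary}`,
    tags: ["oss-digest"],
  };
}

async function ingestProject(baseUrl: string, apiKey: string, p: ProjectMeta) {
  // Hypothetical route; WeKnora's real ingestion endpoint may differ.
  const res = await fetch(`${baseUrl}/api/v1/documents`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildIngestPayload(p)),
  });
  if (!res.ok) throw new Error(`ingest failed: ${res.status}`);
  return res.json();
}
```

Keeping the payload builder separate from the HTTP call makes it straightforward to unit-test the mapping from digest metadata to WeKnora documents.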
## Novelty / why now
Combines RAG, ReAct agent, and auto-wiki generation with multi-source ingestion (Feishu, Notion, etc.) and 20+ LLM providers. Active development by Tencent.
## Risks
License ambiguity (NOASSERTION vs MIT), large Go codebase, requires external vector DB, active development may cause breaking changes.
## Safety scan
- Risk level: **low**
- Stars: 14825 (age 295d, 50.25 stars/day)
- Last push: 0 days ago
- Contributors: 85
- License: NOASSERTION
- Postinstall hooks: none
- Suspicious patterns: none
- Notes: (none)
### Reviewer safety notes
License unclear (NOASSERTION but MIT badge in README). Requires significant infrastructure. Not audited. May have telemetry.
# supertone-inc/supertonic
**URL:** https://github.com/supertone-inc/supertonic
**One-liner:** Lightning-fast on-device multilingual TTS using ONNX, with bindings for Python, Node.js, Swift, Rust, etc.
**Relevance to oss-digest:** medium (65/100)
**Integration:** cherry-pick
## Summary
On-device multilingual TTS using ONNX, with Node.js support.
## Why it's useful here
oss-digest produces daily digests of open-source news; adding TTS would let users listen to the digest, increasing engagement and accessibility. The Node.js SDK can be integrated into Next.js API routes to generate audio for each digest item.
## Suggested use
Use supertonic's Node.js SDK to generate audio files for digest items, embed an audio player in the UI. Consider pre-generating audio during digest creation and storing in S3 or similar.
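The pre-generation step could cache synthesized audio by a content hash so repeated digest builds skip TTS. The `synthesize` callback below stands in for supertonic's Node.js binding, whose real API surface should be verified against the repo's examples; the cache here is an in-memory map, but the same key works for S3 objects.

```typescript
// Sketch: generate digest audio once per item, keyed by a content hash.
import { createHash } from "node:crypto";

type Synthesize = (text: string) => Promise<Uint8Array>;

// Deterministic cache key derived from the digest item's text.
function audioCacheKey(text: string): string {
  return createHash("sha256").update(text).digest("hex");
}

async function getOrGenerateAudio(
  text: string,
  cache: Map<string, Uint8Array>,
  synthesize: Synthesize,
): Promise<Uint8Array> {
  const key = audioCacheKey(text);
  const hit = cache.get(key);
  if (hit) return hit; // skip expensive TTS on repeat builds
  const audio = await synthesize(text);
  cache.set(key, audio);
  return audio;
}
```

Injecting the synthesizer keeps the caching logic testable without loading ONNX models, which also sidesteps the serverless-runtime concern noted under Risks during CI runs.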
## Novelty / why now
On-device TTS supporting 31 languages, optimized for edge inference, with a Voice Builder feature.
## Risks
Node.js binding may not be production-ready; ONNX runtime native dependency may not work in serverless environments. Large model downloads (Git LFS) require caching strategy. Project primarily Swift-based; Node.js path is an example, not official SDK.
## Safety scan
- Risk level: **low**
- Stars: 3769 (age 176d, 21.41 stars/day)
- Last push: 6 days ago
- Contributors: 4
- License: MIT
- Postinstall hooks: none
- Suspicious patterns: none
- Notes: (none)
### Reviewer safety notes
Low risk: MIT license, no suspicious patterns, active development. However, Node.js binding is example-grade; production readiness unclear. Model downloads are large and require Git LFS. ONNX runtime must be available in deployment environment.
# statewright/statewright
**URL:** https://github.com/statewright/statewright
**One-liner:** State machine guardrails for AI coding agents, constraining tool access per workflow phase.
**Relevance to oss-digest:** medium (65/100)
**Integration:** cleanroom-rebuild
## Summary
State machine guardrails that control which tools your AI agent can use in each phase.
## Why it's useful here
oss-digest uses a two-stage DeepSeek pipeline to generate digests; statewright could constrain the LLM's tool usage (read-only during planning, write-only during generation) to reduce flailing and improve output quality.
## Suggested use
Study statewright's state definitions and transition guards, then cleanroom-rebuild a similar concept in Python/Next for oss-digest's agent loop.
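A cleanroom rebuild could start from a guard like this, written in TypeScript to match the digest stack rather than statewright's Rust. The phase names and tool whitelists are illustrative assumptions mapping to the read-only-plan / write-only-generate idea above, not statewright's actual schema.

```typescript
// Minimal per-phase tool guardrail for a two-phase agent pipeline.
type Phase = "plan" | "generate";

const ALLOWED_TOOLS: Record<Phase, ReadonlySet<string>> = {
  plan: new Set(["read_file", "search"]), // read-only while planning
  generate: new Set(["write_file"]),      // write-only while generating
};

class AgentGuard {
  constructor(private phase: Phase = "plan") {}

  // Reject any tool call not whitelisted for the current phase.
  checkTool(tool: string): boolean {
    return ALLOWED_TOOLS[this.phase].has(tool);
  }

  // Only the forward plan -> generate transition is legal.
  transition(next: Phase): void {
    if (this.phase === "plan" && next === "generate") {
      this.phase = next;
      return;
    }
    throw new Error(`illegal transition ${this.phase} -> ${next}`);
  }
}
```

Keeping transitions explicit and throwing on anything else is the deterministic-engine property worth preserving from the original: the LLM can request any tool, but the guard, not the model, decides what executes.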
## Novelty / why now
Repackages classic state machines as a deterministic Rust engine + MCP plugin to enforce per-phase tool restrictions on AI agents.
## Risks
Single-maintainer, no license, unproven at scale; rebuilding in Python avoids Rust compilation dependency.
## Safety scan
- Risk level: **medium**
- Stars: 188 (age 9d, 20.89 stars/day)
- Last push: 0 days ago
- Contributors: 1
- License: unknown
- Postinstall hooks: none
- Suspicious patterns: none
- Notes: single-contributor repo with notable stars
### Reviewer safety notes
Single-maintainer repo (9 days old, 188 stars) with no license; rapid star growth may be inorganic; risk of abandonment.
# MemoriLabs/Memori
**URL:** https://github.com/MemoriLabs/Memori
**One-liner:** Memori is an LLM-agnostic memory layer that persists agent execution and conversation state, with both TypeScript and Python SDKs.
**Relevance to oss-digest:** medium (55/100)
**Integration:** cherry-pick
## Summary
Agent-native memory infrastructure for persistent state.
## Why it's useful here
oss-digest uses a two-stage DeepSeek pipeline; Memori can remember which projects the user has already seen or engaged with, improving triage and personalization.
## Suggested use
Register Memori with the LLM client to maintain a memory of previously digested projects, user preferences, and feedback to refine future digests.
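The "already seen" use case reduces to a dedup filter over a persisted set. Memori's real SDK would back the store; the `SeenStore` interface below is a stand-in assumption so the triage logic stays independent of the memory backend.

```typescript
// Sketch: filter already-digested projects out of a new run and record
// the fresh ones so the next run skips them too.
interface SeenStore {
  has(id: string): boolean;
  add(id: string): void;
}

function filterUnseen(projectUrls: string[], store: SeenStore): string[] {
  const fresh = projectUrls.filter((u) => !store.has(u));
  for (const u of fresh) store.add(u);
  return fresh;
}
```

Isolating this behind an interface also addresses the cloud-dependency risk below: the store can point at Memori Cloud, a BYODB deployment, or a plain local table without touching the triage code.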
## Novelty / why now
Strong LoCoMo benchmark results (81.95% accuracy at 5% of full-context tokens) and both cloud and BYODB options.
## Risks
Default usage depends on Memori Cloud (SaaS); may also conflict with any memory approach already in the pipeline.
## Safety scan
- Risk level: **low**
- Stars: 14416 (age 293d, 49.20 stars/day)
- Last push: 0 days ago
- Contributors: 34
- License: NOASSERTION
- Postinstall hooks: none
- Suspicious patterns: none
- Notes: (none)
### Reviewer safety notes
Scan reports the license as NOASSERTION, though the repo appears to be Apache 2.0; verify before depending on it. No postinstall hooks, no secrets, low risk. However, default usage depends on Memori Cloud (SaaS), which may raise data privacy concerns; BYODB mitigates this.