…art types and LLMClient
Introduces host-agnostic types and interfaces so @loreai/core can run
under any host (OpenCode, Pi, standalone), not just OpenCode:
Types (packages/core/src/types.ts):
- LoreMessage = LoreUserMessage | LoreAssistantMessage (discriminated on .role)
- LorePart = LoreTextPart | LoreReasoningPart | LoreToolPart | LoreGenericPart
with isTextPart(), isReasoningPart(), isToolPart() type guards
- LoreMessageWithParts = { info: LoreMessage; parts: LorePart[] }
- LLMClient interface with a single .prompt(system, user, opts?) method
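A minimal sketch of what these types and guards might look like. Only the shapes named above are from the description; field names such as `text`, `tool`, and the `opts` shape are illustrative assumptions, not the actual `packages/core/src/types.ts` contents.

```typescript
// Host-agnostic message types, discriminated on .role (sketch; extra fields assumed).
interface LoreUserMessage { role: "user"; id: string; }
interface LoreAssistantMessage { role: "assistant"; id: string; }
type LoreMessage = LoreUserMessage | LoreAssistantMessage;

// Part types, discriminated on .type (field names beyond .type are assumptions).
interface LoreTextPart { type: "text"; text: string; }
interface LoreReasoningPart { type: "reasoning"; text: string; }
interface LoreToolPart { type: "tool"; tool: string; }
interface LoreGenericPart { type: string; [key: string]: unknown; }
type LorePart = LoreTextPart | LoreReasoningPart | LoreToolPart | LoreGenericPart;

// Type guards for safe narrowing of LorePart.
const isTextPart = (p: LorePart): p is LoreTextPart => p.type === "text";
const isReasoningPart = (p: LorePart): p is LoreReasoningPart => p.type === "reasoning";
const isToolPart = (p: LorePart): p is LoreToolPart => p.type === "tool";

// The unit that hooks operate on.
interface LoreMessageWithParts { info: LoreMessage; parts: LorePart[]; }

// Single-method LLM abstraction; the opts shape here is an assumption.
interface LLMClient {
  prompt(system: string, user: string, opts?: { agent?: string }): Promise<string | null>;
}
```

Keeping `LLMClient` to a single `prompt` method is what lets any host satisfy it with one small adapter.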
What changed:
- temporal.ts + gradient.ts: import Lore types instead of @opencode-ai/sdk
- gradient.ts: use type guard functions for safe narrowing
- distillation.ts + curator.ts + search.ts: accept LLMClient instead of
OpenCode Client; removed ensureWorkerSession/workerSessions/promptWorker
(OpenCode-specific session lifecycle now lives in the adapter)
- worker.ts: trimmed to just workerSessionIDs tracking + LLMClient re-export
- @opencode-ai/sdk removed from core's devDependencies entirely
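One payoff of the `LLMClient` parameter swap is that core modules can now be exercised with an in-memory stub. A sketch under assumptions: `summarize` is a hypothetical stand-in for the real core functions (`distillation.ts`, `curator.ts`, `search.ts`), not their actual signature.

```typescript
// The single-method abstraction core now depends on (opts shape assumed).
interface LLMClient {
  prompt(system: string, user: string, opts?: { agent?: string }): Promise<string | null>;
}

// In-memory stub: stands in for any host when testing core logic.
const stub: LLMClient = {
  async prompt(_system, user) {
    return `echo:${user}`;
  },
};

// Hypothetical core-style function that takes LLMClient instead of an
// OpenCode Client (name and signature are illustrative).
async function summarize(llm: LLMClient, text: string): Promise<string | null> {
  return llm.prompt("You are a summarizer.", text);
}
```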
OpenCode adapter (packages/opencode/src/llm-adapter.ts):
- createOpenCodeLLMClient() implements LLMClient by wrapping the OpenCode
SDK's session.create + session.prompt with agent-not-found retry
(logic extracted from the old core worker.ts)
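The adapter shape might look roughly like the sketch below. The `session.create` / `session.prompt` call names come from the description above, but their argument and return shapes, the retry condition, and the fallback behavior are all assumptions; the real OpenCode SDK surface will differ.

```typescript
interface LLMClient {
  prompt(system: string, user: string, opts?: { agent?: string }): Promise<string | null>;
}

// Minimal stand-in for the slice of the OpenCode SDK client this adapter
// needs (shapes assumed, not the real SDK types).
interface OpenCodeLike {
  session: {
    create(args: { title: string }): Promise<{ id: string }>;
    prompt(args: { sessionID: string; agent?: string; text: string }): Promise<{ text: string }>;
  };
}

function createOpenCodeLLMClient(client: OpenCodeLike, agent = "lore-worker"): LLMClient {
  let sessionID: string | null = null; // lazily created, reused across prompts
  return {
    async prompt(system, user, opts) {
      if (!sessionID) sessionID = (await client.session.create({ title: "lore" })).id;
      const text = `${system}\n\n${user}`;
      try {
        const res = await client.session.prompt({ sessionID, agent: opts?.agent ?? agent, text });
        return res.text ?? null;
      } catch {
        // Agent-not-found retry (assumed behavior): retry once without the
        // named agent so the host falls back to its default.
        const res = await client.session.prompt({ sessionID, text });
        return res.text ?? null;
      }
    },
  };
}
```

The key point is that all session lifecycle state lives here, in the adapter, so core modules stay stateless with respect to the host.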
Tests: 350 pass (down from 358 — 10 OpenCode-specific promptWorker tests
moved to adapter, 2 worker tracking tests added). No behavior change.
Summary
Fully decouples @loreai/core from the OpenCode SDK. The core package now has zero dependency on @opencode-ai/sdk or @opencode-ai/plugin — it defines its own types and interfaces.
New abstractions
Lore types (packages/core/src/types.ts):
- LoreMessage = LoreUserMessage | LoreAssistantMessage — discriminated on .role
- LorePart = LoreTextPart | LoreReasoningPart | LoreToolPart | LoreGenericPart — with isTextPart(), isReasoningPart(), isToolPart() type guards
- LoreMessageWithParts = { info: LoreMessage; parts: LorePart[] } — the unit that hooks operate on
- LLMClient — .prompt(system, user, opts?) → Promise<string | null>
OpenCode adapter (packages/opencode/src/llm-adapter.ts):
- createOpenCodeLLMClient() implements LLMClient by wrapping client.session.create() + client.session.prompt() with the existing agent-not-found retry logic.
What moved
- promptWorker() session lifecycle → packages/opencode/src/llm-adapter.ts
- ensureWorkerSession() per module → deleted (adapter handles session lifecycle)
- promptWorker tests → deleted (logic lives in the adapter now)
- @opencode-ai/sdk devDep removed from core's package.json
Zero behavior change.
Why this matters
@loreai/core can now be consumed by any host that implements LLMClient:
- complete() from @mariozechner/pi-ai (next PR)
- fetch() to provider APIs (future)
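For the future fetch()-based host, an adapter could be as small as the sketch below. This assumes an OpenAI-compatible chat-completions endpoint; the URL path, request body, and response shape are illustrative assumptions, not a committed design.

```typescript
interface LLMClient {
  prompt(system: string, user: string, opts?: { agent?: string }): Promise<string | null>;
}

// Sketch: implement LLMClient over fetch() against a hypothetical
// OpenAI-compatible provider API (endpoint and payload shapes assumed).
function createFetchLLMClient(baseURL: string, apiKey: string, model: string): LLMClient {
  return {
    async prompt(system, user) {
      const res = await fetch(`${baseURL}/chat/completions`, {
        method: "POST",
        headers: {
          "content-type": "application/json",
          authorization: `Bearer ${apiKey}`,
        },
        body: JSON.stringify({
          model,
          messages: [
            { role: "system", content: system },
            { role: "user", content: user },
          ],
        }),
      });
      if (!res.ok) return null; // LLMClient contract: null on failure
      const data = await res.json();
      return data.choices?.[0]?.message?.content ?? null;
    },
  };
}
```

Because the contract is one method returning `Promise<string | null>`, swapping hosts (OpenCode session, pi-ai `complete()`, raw fetch) never touches core code.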