🧭 Update ROADMAP and add configurable logging; fix autochat syntax

- Refresh ROADMAP.md to reflect current Forgejo milestones and open feature issues
- Add .env logging controls and update src/logger.py to honor them (console/file/error toggles, per-handler levels)
- Fix indentation bug in src/autochat.py so emoji reactions work
- Truncate and standardize logs; document logging usage in .env
milo 2025-09-19 14:44:36 -04:00
parent 72e088c0ad
commit 2d5c636b9d
5 changed files with 249 additions and 15 deletions

.env

@@ -1,11 +1,31 @@
DISCORD_TOKEN=MTM2OTc3NDY4OTYzNDg4MTU4Ng.G9Nrgz.akHoOO9SrXCDwiOCI3BUXfdR4bpSNb9zrVx9UI
# This is using the TailScale IP
OLLAMA_API=http://100.107.221.83:11434/api/
OLLAMA_API=http://192.168.0.100:11434/
MODEL_NAME=gemma3:12b
CHANNEL_ID=1370420592360161393
CHANNEL_ID=1380999713272238151
SHOW_THINKING_BLOCKS=false
DEBUG_MODE=true
AUTOREPLY_ENABLED=true
# ---------------------------
# Logging configuration
# - LOG_LEVEL: global base level (INFO recommended)
# - LOG_CONSOLE: enable console logs (true/false)
# - LOG_CONSOLE_LEVEL: level for console output (INFO, DEBUG, etc.)
# - LOG_CONSOLE_TO_STDOUT: if true, console logs go to STDOUT (useful for container logging)
# - LOG_TO_FILE: enable writing to a rotating file
# - LOG_FILE_LEVEL: level stored in the file (set to DEBUG to capture full LLM payloads)
# - LOG_FILE: filename for rotated logs
# - LOG_ERROR_FILE: enable writing errors to a separate file (filename is derived)
# Example: to capture full prompts/responses in logs, set LOG_FILE_LEVEL=DEBUG
LOG_LEVEL=INFO
LOG_CONSOLE=true
LOG_CONSOLE_LEVEL=INFO
LOG_CONSOLE_TO_STDOUT=true
LOG_TO_FILE=true
LOG_FILE_LEVEL=DEBUG
LOG_FILE=bot.log
LOG_ERROR_FILE=true
# ---------------------------
# Cooldown in seconds
AUTOREPLY_COOLDOWN=0
# Used for Codex to reach my repo on Forgejo
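
These toggles are consumed by `setup_logger` in src/logger.py (diffed below). A minimal usage sketch, assuming the bot loads .env with python-dotenv and that the module is importable as `src.logger` (neither is shown in this commit):

```python
# Sketch only: assumes python-dotenv and the src.logger import path.
from dotenv import load_dotenv

from src.logger import setup_logger

load_dotenv()                 # exports LOG_LEVEL, LOG_CONSOLE, LOG_TO_FILE, ... into os.environ
logger = setup_logger("bot")  # setup_logger reads those variables to build its handlers

logger.info("visible on the console and in bot.log")
logger.debug("with the values above, only bot.log records this (LOG_FILE_LEVEL=DEBUG)")
```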

ROADMAP.md

@@ -1,5 +1,76 @@
# 📍 DeltaBot Development Roadmap
This roadmap is an actionable summary of current milestones and feature issues. It mirrors the milestones on the Forgejo repository and links to issues (local issue numbers).
---
## 🚀 Alpha Build Ready — Core features (Milestone)
Focus: deliver core capabilities so Delta is useful and reliably persona-driven.
Open high-priority items (Alpha):
- #37 — 🧠 LoRA Support — improve model fine-tuning/load-time behavior
- #36 — Memory — persistence for context beyond immediate messages
- #26 — Web usage — optional web-enabled features (codex/integration)
- #25 — 🔁 Enable Modelfile Support — support alternate model packaging
- #24 — 💸 Set up Monetization — billing/paid features plumbing
- #22 — 📡 Remote Admin Panel — admin web UI / remote control
- #17 — 🖼️ Image Generation — generate images via local models
- #16 — 👀 Image Interpretation — describe and analyze posted images
- #10 — Post "Reply" — post as reply-to instead of plain message
Closed/implemented Alpha items: #8, #9, #15, #30, #31
Suggested next steps for Alpha:
- Break large items (Remote Admin Panel, Image Generation) into sub-tasks.
- Prioritize Memory (#36) and Post "Reply" (#10) to stabilize context handling.
- Add clear acceptance criteria to each open Alpha issue.
---
## 🧪 Beta Build — Polishing & optional features (Milestone)
Focus: polish, scaling, and user-experience improvements.
Open Beta items:
- #35 — Respect token budget (~1000 tokens max)
- #34 — 📌 Pin system messages
- #33 — 🧠 Add memory persistence (overlaps with Alpha Memory)
- #27 — Multi model support — support local switching/multiple endpoints
- #18 — 🎭 Multi Personality — multiple personas selectable per server/channel
Suggested next steps for Beta:
- Decide which Alpha items must land before Beta starts.
- Resolve overlaps (Memory appears in both Alpha and Beta) and consolidate the plan.
---
## 📦 Backlog / Unmilestoned Features
Lower-priority, exploratory, or undecided items to consider batching into future milestones.
- #23 — 📢 Broadcast / Announcement Mode
- #21 — 📈 Analytics Dashboard
- #20 — Shopper Assist
- #19 — 🚨 Content-aware moderation assist
- #14 — 📊 Engagement-based adjustment
- #13 — Context-aware scheduling
- #12 — Bot can be given some power to change the server a bit
- #11 — 🗓️ Scheduled specific times/dates
Suggested backlog housekeeping:
- Group these into thematic milestones: e.g. "Admin Tools", "Analytics", "Media & Vision".
- Add rough estimates (S/M/L) and owners for each item.
---
## How to use this roadmap
- Update issue bodies with acceptance criteria and subtask checklists.
- Assign owners and estimate effort for each Alpha item.
- Use the Forgejo milestone due dates to prioritize development sprints.
---
_Generated: Sep 19, 2025 — synchronized with repository milestones and open feature issues._
# 📍 DeltaBot Development Roadmap
This roadmap tracks the major phases of DeltaBot — from MVP to full AI companion chaos. ✅ = complete
---

ROADMAP.md.bak (new file)

@@ -0,0 +1 @@
Backup of previous ROADMAP.md

src/autochat.py

@@ -89,7 +89,7 @@ async def maybe_react_to_message(message, persona):
    roll = random.random()
    if roll > EMOJI_REACTION_CHANCE:
        logger.info(f"🎲 Reaction skipped (chance {EMOJI_REACTION_CHANCE:.2f}, roll {roll:.2f})")
        logger.debug("🎲 Reaction skipped (chance %.2f, roll %.2f)", EMOJI_REACTION_CHANCE, roll)
        return
    try:
@@ -108,10 +108,11 @@ async def maybe_react_to_message(message, persona):
        )
        emoji_reply = get_ai_response(prompt).strip()
        logger.info(f"🎭 Emoji suggestion from LLM: {emoji_reply}")
        # Log the raw emoji suggestion at DEBUG
        logger.debug("🎭 Emoji suggestion from LLM: %s", emoji_reply)
        # Extract valid emojis
        import re
        emojis = re.findall(r'[\U0001F300-\U0001F6FF\U0001F900-\U0001F9FF\U0001F1E0-\U0001F1FF]', emoji_reply)
        unique_emojis = list(dict.fromkeys(emojis))[:3]
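
For reference, a standalone sketch of the extraction and dedup step above; the sample `emoji_reply` string is hypothetical:

```python
import re

# Same Unicode ranges as above: pictographs/emoticons, supplemental symbols, regional indicators (flags)
EMOJI_RE = re.compile(r'[\U0001F300-\U0001F6FF\U0001F900-\U0001F9FF\U0001F1E0-\U0001F1FF]')

emoji_reply = "Sure 😂🔥😂 why not"                 # hypothetical LLM output
emojis = EMOJI_RE.findall(emoji_reply)             # ['😂', '🔥', '😂']
unique_emojis = list(dict.fromkeys(emojis))[:3]    # order-preserving dedupe, max 3 -> ['😂', '🔥']
print(unique_emojis)
```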

src/logger.py

@@ -1,22 +1,163 @@
import logging
import os
import sys
from logging.handlers import RotatingFileHandler

def setup_logger(name: str = "bot", level=logging.INFO, log_file: str = "bot.log"):
    formatter = logging.Formatter(

def setup_logger(name: str = "bot"):
    """Create a logger with rotating file handler, separate error log, and
    a concise console output. Behavior is controlled by environment vars:
    - LOG_LEVEL (default INFO)
    - LOG_FILE (default bot.log)
    - LOG_MAX_BYTES (default 5_000_000)
    - LOG_BACKUP_COUNT (default 5)
    If `colorlog` is installed, console output will be colorized.
    """
    # Config from environment (all values can be set in .env)
    level_name = os.getenv("LOG_LEVEL", "INFO").upper()
    try:
        base_level = getattr(logging, level_name)
    except Exception:
        base_level = logging.INFO

    # Handlers toggle + per-handler levels
    console_enabled = os.getenv("LOG_CONSOLE", "true").lower() in ("1", "true", "yes")
    console_level_name = os.getenv("LOG_CONSOLE_LEVEL", level_name).upper()
    console_to_stdout = os.getenv("LOG_CONSOLE_TO_STDOUT", "true").lower() in ("1", "true", "yes")
    file_enabled = os.getenv("LOG_TO_FILE", "true").lower() in ("1", "true", "yes")
    file_level_name = os.getenv("LOG_FILE_LEVEL", "DEBUG").upper()
    log_file = os.getenv("LOG_FILE", "bot.log")
    max_bytes = int(os.getenv("LOG_MAX_BYTES", 5_000_000))
    backup_count = int(os.getenv("LOG_BACKUP_COUNT", 5))
    error_file_enabled = os.getenv("LOG_ERROR_FILE", "true").lower() in ("1", "true", "yes")

    # File formatter: include module and line number for easier debugging
    file_formatter = logging.Formatter(
        "[%(asctime)s] [%(levelname)s] [%(name)s:%(lineno)d] %(message)s",
        "%Y-%m-%d %H:%M:%S",
    )

    # Console formatter: shorter. Try to use colorlog if available.
    try:
        import colorlog
        console_formatter = colorlog.ColoredFormatter(
            "%(log_color)s[%(asctime)s] [%(levelname)s]%(reset)s %(message)s",
            datefmt="%Y-%m-%d %H:%M:%S",
            log_colors={
                "DEBUG": "cyan",
                "INFO": "green",
                "WARNING": "yellow",
                "ERROR": "red",
                "CRITICAL": "red",
            },
        )
    except Exception:
        console_formatter = logging.Formatter(
            "[%(asctime)s] [%(levelname)s] %(message)s", "%Y-%m-%d %H:%M:%S"
        )

    stream_handler = logging.StreamHandler()
    stream_handler.setFormatter(formatter)

    # Handlers
    # Console handler (stdout by default, configurable)
    console_handler = logging.StreamHandler(sys.stdout if console_to_stdout else sys.stderr)
    console_handler.setFormatter(console_formatter)
    try:
        console_level = getattr(logging, console_level_name)
    except Exception:
        console_level = base_level
    console_handler.setLevel(console_level)

    file_handler = logging.FileHandler(log_file, encoding='utf-8')
    file_handler.setFormatter(formatter)

    file_handler = RotatingFileHandler(
        log_file, maxBytes=max_bytes, backupCount=backup_count, encoding="utf-8"
    )
    file_handler.setFormatter(file_formatter)
    try:
        file_level = getattr(logging, file_level_name)
    except Exception:
        file_level = logging.DEBUG
    file_handler.setLevel(file_level)

    # Separate error-only rotating file
    error_log_file = os.path.splitext(log_file)[0] + ".error.log"
    error_handler = RotatingFileHandler(
        error_log_file, maxBytes=max_bytes, backupCount=backup_count, encoding="utf-8"
    )
    error_handler.setLevel(logging.ERROR)
    error_handler.setFormatter(file_formatter)

    logger = logging.getLogger(name)
    logger.setLevel(level)

    # Set logger base level to the most permissive of configured levels so handlers can filter
    levels = [base_level]
    if console_enabled:
        levels.append(console_level)
    if file_enabled:
        levels.append(file_level)
    # minimum numeric level means more verbose (DEBUG=10)
    logger.setLevel(min(levels))

    # Avoid adding duplicate handlers if logger already configured
    if not logger.handlers:
        logger.addHandler(stream_handler)
        if console_enabled:
            logger.addHandler(console_handler)
        if file_enabled:
            logger.addHandler(file_handler)
        if error_file_enabled:
            logger.addHandler(error_handler)

    return logger
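
For reference, the level arithmetic above with the values set in .env (LOG_LEVEL=INFO, LOG_CONSOLE_LEVEL=INFO, LOG_FILE_LEVEL=DEBUG); this is just the min() logic worked through, not new behavior:

```python
import logging

base_level = logging.INFO     # 20, LOG_LEVEL=INFO
console_level = logging.INFO  # 20, LOG_CONSOLE_LEVEL=INFO
file_level = logging.DEBUG    # 10, LOG_FILE_LEVEL=DEBUG

# The logger takes the most permissive (numerically smallest) level so records
# reach every handler; each handler then filters with its own level.
print(min([base_level, console_level, file_level]) == logging.DEBUG)  # True
# Result: DEBUG records are written to bot.log but dropped by the INFO console handler.
```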

def generate_req_id(prefix: str = "r") -> str:
    """Generate a short request id for correlating logs."""
    import uuid
    return f"{prefix}{uuid.uuid4().hex[:8]}"

def mask_secret(value: str | None, head: int = 6, tail: int = 4) -> str | None:
    """Mask a secret for safe logging."""
    if not value:
        return value
    if len(value) <= head + tail + 3:
        return "***"
    return f"{value[:head]}...{value[-tail:]}"

def log_llm_request(logger: logging.Logger, req_id: str, model: str, user: str | None, context_len: int | None):
    logger.info("%s LLM request start model=%s user=%s context_len=%s", req_id, model, user or "-", context_len or 0)


def log_llm_payload(logger: logging.Logger, req_id: str, payload: object):
    logger.debug("%s LLM full payload: %s", req_id, payload)


def log_llm_response(logger: logging.Logger, req_id: str, model: str, duration_s: float, short_text: str | None, raw: object | None = None):
    logger.info("%s LLM response model=%s duration=%.3fs summary=%s", req_id, model, duration_s, (short_text or "[no text]").replace("\n", " ")[:160])
    if raw is not None:
        logger.debug("%s LLM raw response: %s", req_id, raw)
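
These helpers correlate request and response log lines through a shared request id. A sketch of how a caller in the LLM path might combine them; the import path, the `send_to_ollama` stub, and the sample values are assumptions, not part of this commit:

```python
import os
import time

# Assumed import path; the helpers are defined in src/logger.py in this commit.
from src.logger import (
    setup_logger, generate_req_id, mask_secret,
    log_llm_request, log_llm_payload, log_llm_response,
)


def send_to_ollama(text: str) -> str:
    """Stub standing in for the real Ollama request (hypothetical)."""
    return "🔥"


logger = setup_logger("bot.llm")
req_id = generate_req_id()                                   # e.g. "r3f9a2c1d"
logger.debug("token=%s", mask_secret(os.getenv("DISCORD_TOKEN")))

prompt = "Suggest one emoji reaction for the last message."  # illustrative payload
log_llm_request(logger, req_id, model="gemma3:12b", user="milo", context_len=12)
log_llm_payload(logger, req_id, {"prompt": prompt})          # recorded only by DEBUG-level handlers

start = time.monotonic()
reply = send_to_ollama(prompt)
log_llm_response(logger, req_id, "gemma3:12b", time.monotonic() - start, reply)
```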

class SamplingFilter(logging.Filter):
    """Filter that randomly allows a fraction of records through.
    Useful for noisy logs like 'reaction skipped'.
    """

    def __init__(self, sample_rate: float = 0.1):
        super().__init__()
        try:
            self.rate = float(sample_rate)
        except Exception:
            self.rate = 0.1

    def filter(self, record: logging.LogRecord) -> bool:
        import random
        if self.rate >= 1.0:
            return True
        return random.random() < self.rate
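
The filter is not wired up by `setup_logger` itself; one way to attach it (a sketch, the logger and handler names here are illustrative):

```python
import logging

from src.logger import SamplingFilter  # assumed import path for this commit's module

# Let only ~10% of noisy records through this console handler; other handlers are unaffected.
noisy_logger = logging.getLogger("bot.autochat")
noisy_logger.setLevel(logging.DEBUG)

console = logging.StreamHandler()
console.addFilter(SamplingFilter(sample_rate=0.1))
noisy_logger.addHandler(console)

for i in range(100):
    noisy_logger.debug("🎲 Reaction skipped (%d)", i)  # roughly 10 of these are printed
```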