backend.core.utils.logger module¶
Centralized Logging Configuration for FlowState.
This module provides a standardized logging infrastructure for the FlowState application. It implements structured logging with appropriate handlers, formatters, and log levels for different environments (development, production).
- The logging system supports:
Structured JSON logging for production
Human-readable console logging for development
File-based log rotation
Context-aware logging with request IDs
Performance metrics logging
Error tracking with stack traces
- Architecture:
The logger uses Python’s standard logging module with customized formatters and handlers. Log messages are enriched with contextual information such as timestamps, module names, and request IDs.
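As a rough illustration of this enrichment pattern (a sketch, not this module's actual implementation; the ContextEnricher name and the request_id default are assumptions), a logging.Filter can attach extra fields to every record before a formatter sees them:

import logging

class ContextEnricher(logging.Filter):
    """Attach contextual fields to every record passing through a handler."""

    def __init__(self, request_id: str = "unknown") -> None:
        super().__init__()
        self.request_id = request_id

    def filter(self, record: logging.LogRecord) -> bool:
        # Hypothetical field; formatters can then reference %(request_id)s.
        record.request_id = self.request_id
        return True  # never drop records, only enrich them

handler = logging.StreamHandler()
handler.addFilter(ContextEnricher(request_id="abc-123"))
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(name)s [%(request_id)s] %(levelname)s: %(message)s"))
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("enriched record")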
- Log Levels:
DEBUG: Detailed diagnostic information for troubleshooting
INFO: General informational messages about application state
WARNING: Potentially harmful situations that don’t prevent operation
ERROR: Error events that might still allow operation to continue
CRITICAL: Severe errors that may prevent operation
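To make these distinctions concrete, a short illustrative example (the scenarios are invented, not taken from FlowState):

import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("levels_demo")

logger.debug("Raw EEG chunk: 256 samples, 4 channels")    # diagnostic detail
logger.info("Session started for user 123")               # normal state change
logger.warning("Packet loss at 3%; continuing")           # degraded but operational
logger.error("Failed to persist session; will retry")     # recoverable failure
logger.critical("Database unreachable; shutting down")    # operation at risk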
- Best Practices:
Use appropriate log levels for different types of messages
Include relevant context in log messages
Avoid logging sensitive information (passwords, tokens, PII); a redaction sketch follows this list
Use structured logging for production environments
Log exceptions with stack traces for debugging
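One way to honor the no-sensitive-data rule is to mask known fields before they reach a log call; a minimal sketch, assuming a hypothetical blocklist (SENSITIVE_KEYS and redact are not part of this module):

import logging

SENSITIVE_KEYS = {"password", "token", "api_key", "ssn"}  # hypothetical blocklist

def redact(payload: dict) -> dict:
    """Return a copy of payload with sensitive values masked."""
    return {k: "[REDACTED]" if k in SENSITIVE_KEYS else v for k, v in payload.items()}

logger = logging.getLogger(__name__)
logger.warning("Login attempt: %s", redact({"user": "ada", "password": "hunter2"}))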
- Usage:
>>> from core.utils.logger import get_logger
>>> logger = get_logger(__name__)
>>> logger.info("Processing EEG data", extra={'user_id': 123})
>>> logger.error("Failed to connect to Muse headband", exc_info=True)
- Configuration:
Environment variables control logging behavior (a sketch that reads them follows the list):
- LOG_LEVEL : str
Minimum log level to output (DEBUG, INFO, WARNING, ERROR, CRITICAL). Default: INFO for production, DEBUG for development.
- LOG_FORMAT : str
Output format ('json' for structured, 'text' for human-readable). Default: json for production, text for development.
- LOG_FILE : str
Path to the log file for file-based logging. Default: logs/flowstate.log.
- LOG_MAX_BYTES : int
Maximum size of the log file before rotation, in bytes. Default: 10485760 (10 MB).
- LOG_BACKUP_COUNT : int
Number of backup log files to keep. Default: 5.
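A hedged sketch of how these variables might be read at startup; the defaults mirror the table above, but the helper name is an assumption, not this module's API:

import os

def read_logging_config(production: bool = True) -> dict:
    """Collect logging settings from the documented environment variables."""
    return {
        "level": os.getenv("LOG_LEVEL", "INFO" if production else "DEBUG"),
        "format_type": os.getenv("LOG_FORMAT", "json" if production else "text"),
        "log_file": os.getenv("LOG_FILE", "logs/flowstate.log"),
        "max_bytes": int(os.getenv("LOG_MAX_BYTES", "10485760")),  # 10 MB
        "backup_count": int(os.getenv("LOG_BACKUP_COUNT", "5")),
    }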
See also
logging: Python standard logging module
StructuredFormatter: Custom JSON formatter for production logs
Examples
Basic logging:
>>> logger = get_logger(__name__)
>>> logger.info("Application started")
>>> logger.debug("Configuration loaded", extra={'config_file': '.env'})
Error logging with context:
>>> try:
... process_eeg_data(data)
... except Exception as e:
... logger.error(
... "EEG processing failed",
... exc_info=True,
... extra={'data_shape': data.shape, 'user_id': user.id}
... )
Performance logging:
>>> import time
>>> start = time.time()
>>> result = expensive_operation()
>>> duration = time.time() - start
>>> logger.info(
... "Operation completed",
... extra={'operation': 'expensive_operation', 'duration_ms': duration * 1000}
... )
Context manager for request logging:
>>> with log_context(request_id=request.id, user_id=user.id):
... logger.info("Processing request")
... # All logs within this context will include request_id and user_id
- Security Considerations:
Never log passwords, API keys, or authentication tokens
Sanitize user input before logging to prevent log injection (see the sketch after this list)
Limit logging of PII (personally identifiable information)
Use appropriate log levels to avoid excessive information disclosure
Ensure log files have restricted file permissions
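Log injection works by smuggling newlines or control characters into a logged value so an attacker can forge extra log lines. A minimal sanitizer along these lines (sanitize_for_log is illustrative, not part of this module):

def sanitize_for_log(value: str, max_len: int = 200) -> str:
    """Collapse control characters so user input cannot forge extra log lines."""
    cleaned = "".join(ch if ch.isprintable() else " " for ch in value)
    return cleaned[:max_len]

# "admin\n2024-11-08 INFO forged entry" now logs as one harmless line.
print(sanitize_for_log("admin\n2024-11-08 INFO forged entry"))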
- Performance Impact:
Logging overhead is typically < 1 ms per log statement
File I/O is asynchronous to minimize blocking (a QueueHandler sketch follows this list)
Log rotation happens in a background thread
JSON formatting adds roughly 0.5 ms of overhead per message
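The standard-library way to keep file writes off the hot path is a QueueHandler/QueueListener pair; the sketch below shows that pattern under the assumption that this is roughly the mechanism meant above (the page does not spell it out):

import logging
import logging.handlers
import os
import queue

os.makedirs("logs", exist_ok=True)
log_queue: queue.Queue = queue.Queue(-1)  # unbounded

# Application loggers only enqueue records, so log calls do not block on disk.
logging.getLogger().addHandler(logging.handlers.QueueHandler(log_queue))

# A background thread drains the queue into the rotating file handler.
file_handler = logging.handlers.RotatingFileHandler(
    "logs/flowstate.log", maxBytes=10 * 1024 * 1024, backupCount=5)
listener = logging.handlers.QueueListener(log_queue, file_handler)
listener.start()

logging.getLogger("demo").warning("written off the hot path")
listener.stop()  # flushes any queued records at shutdown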
Note
Loggers are cached by name, so calling get_logger with the same name multiple times returns the same logger instance.
- class backend.core.utils.logger.StructuredFormatter(fmt=None, datefmt=None, style='%', validate=True, *, defaults=None)[source]¶
Bases: Formatter
JSON formatter for structured logging in production.
Converts log records to JSON format with consistent structure and enriched metadata. This enables efficient log parsing and analysis by log aggregation systems like ELK, Splunk, or CloudWatch.
- Attributes:
None; this class relies on the standard logging.Formatter attributes.
- Output Format:
Each log message is formatted as a JSON object with the following fields:
{
    "timestamp": "2024-11-08T15:30:45.123Z",
    "level": "INFO",
    "logger": "core.services.eeg_processor",
    "message": "Processing EEG data",
    "module": "eeg_processor",
    "function": "process_chunk",
    "line": 145,
    "thread": "MainThread",
    "process": 12345,
    "extra": {
        "user_id": 123,
        "request_id": "abc-123-def"
    }
}
Example
>>> formatter = StructuredFormatter()
>>> handler = logging.StreamHandler()
>>> handler.setFormatter(formatter)
>>> logger = logging.getLogger('test')
>>> logger.addHandler(handler)
>>> logger.info("Test message", extra={'key': 'value'})
Note
Exception information is automatically included when exc_info=True is passed to the logging call.
- format(record: LogRecord) → str[source]¶
Format log record as JSON string.
- Parameters:
record (logging.LogRecord) – The log record to format
- Returns:
JSON-formatted log message
- Return type:
str
Note
Extra fields from the record are merged into the output JSON. Datetime values are formatted as ISO 8601 strings.
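For readers curious what such a formatter looks like, a minimal sketch that produces the documented fields (MinimalStructuredFormatter is an illustration, not this module's source):

import json
import logging
from datetime import datetime, timezone

class MinimalStructuredFormatter(logging.Formatter):
    """Bare-bones JSON formatter mirroring the documented output shape."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "timestamp": datetime.fromtimestamp(record.created, tz=timezone.utc)
                .isoformat(timespec="milliseconds").replace("+00:00", "Z"),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "module": record.module,
            "function": record.funcName,
            "line": record.lineno,
            "thread": record.threadName,
            "process": record.process,
        }
        if record.exc_info:
            # Per the note above: include the traceback when exc_info is set.
            payload["exception"] = self.formatException(record.exc_info)
        return json.dumps(payload)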
- class backend.core.utils.logger.ColoredConsoleFormatter(fmt=None, datefmt=None, style='%', validate=True, *, defaults=None)[source]¶
Bases: Formatter
Colored console formatter for development.
Provides human-readable colored output for console logging during development. Different log levels are displayed in different colors for easy visual scanning.
- Colors:
DEBUG: Cyan
INFO: Green
WARNING: Yellow
ERROR: Red
CRITICAL: Red background
Example
>>> formatter = ColoredConsoleFormatter()
>>> handler = logging.StreamHandler()
>>> handler.setFormatter(formatter)
- COLORS = {'CRITICAL': '\x1b[41m', 'DEBUG': '\x1b[36m', 'ERROR': '\x1b[31m', 'INFO': '\x1b[32m', 'WARNING': '\x1b[33m'}¶
- RESET = '\x1b[0m'¶
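Given COLORS and RESET above, the coloring step is plausibly a thin wrapper around Formatter.format; a sketch (not the exact implementation):

import logging

COLORS = {"DEBUG": "\x1b[36m", "INFO": "\x1b[32m", "WARNING": "\x1b[33m",
          "ERROR": "\x1b[31m", "CRITICAL": "\x1b[41m"}
RESET = "\x1b[0m"

class MinimalColoredFormatter(logging.Formatter):
    """Wrap each formatted message in the ANSI color code for its level."""

    def format(self, record: logging.LogRecord) -> str:
        color = COLORS.get(record.levelname, "")  # unknown levels stay uncolored
        return f"{color}{super().format(record)}{RESET}"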
- backend.core.utils.logger.setup_logging(level: str | None = None, format_type: str | None = None, log_file: str | None = None) → None[source]¶
Configure application-wide logging settings.
Sets up logging handlers, formatters, and filters based on environment configuration. Should be called once at application startup.
- Parameters:
level (str, optional) – Minimum log level (DEBUG, INFO, WARNING, ERROR, CRITICAL). If not provided, reads from the LOG_LEVEL environment variable.
format_type (str, optional) – Log format type ('json' or 'text'). If not provided, reads from the LOG_FORMAT environment variable.
log_file (str, optional) – Path to the log file for file-based logging. If not provided, reads from the LOG_FILE environment variable.
- Raises:
ValueError – If invalid log level or format type is provided
Example
>>> # Basic setup for development
>>> setup_logging(level='DEBUG', format_type='text')
>>>
>>> # Production setup with file logging
>>> setup_logging(
...     level='INFO',
...     format_type='json',
...     log_file='/var/log/flowstate/app.log'
... )
Note
This function modifies the root logger configuration, affecting all loggers in the application unless they have propagate=False.
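Pulling the documented pieces together, a hedged sketch of what a function with this signature might do; the handler wiring is an assumption, and only the parameters, validation, and env-var fallbacks come from the docs above:

import logging
import logging.handlers
import os

def setup_logging_sketch(level=None, format_type=None, log_file=None):
    """Rough shape of a setup function matching the documented contract."""
    level = (level or os.getenv("LOG_LEVEL", "INFO")).upper()
    format_type = format_type or os.getenv("LOG_FORMAT", "text")
    log_file = log_file or os.getenv("LOG_FILE", "logs/flowstate.log")

    if level not in {"DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"}:
        raise ValueError(f"invalid log level: {level}")
    if format_type not in {"json", "text"}:
        raise ValueError(f"invalid format type: {format_type}")

    # The real module would use StructuredFormatter for 'json'; a stand-in
    # format string keeps this sketch self-contained.
    formatter = logging.Formatter(
        '{"level": "%(levelname)s", "message": "%(message)s"}'
        if format_type == "json"
        else "%(asctime)s %(name)s %(levelname)s: %(message)s")

    root = logging.getLogger()  # affects all propagating loggers
    root.setLevel(level)
    console = logging.StreamHandler()
    console.setFormatter(formatter)
    root.addHandler(console)

    os.makedirs(os.path.dirname(log_file) or ".", exist_ok=True)
    file_handler = logging.handlers.RotatingFileHandler(
        log_file, maxBytes=10 * 1024 * 1024, backupCount=5)
    file_handler.setFormatter(formatter)
    root.addHandler(file_handler)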
- backend.core.utils.logger.get_logger(name: str) → Logger[source]¶
Get a logger instance for the specified module.
Returns a configured logger instance for the given name. Loggers are cached, so multiple calls with the same name return the same instance.
- Parameters:
name (str) – Logger name, typically __name__ of the calling module
- Returns:
Configured logger instance
- Return type:
logging.Logger
Example
>>> logger = get_logger(__name__)
>>> logger.info("Module initialized")
>>> logger.debug("Debug information", extra={'key': 'value'})
Note
Always use __name__ as the logger name to maintain a clear module hierarchy in log messages.
- class backend.core.utils.logger.log_context(**kwargs)[source]¶
Bases: object
Context manager for adding context to all log messages within a scope.
Useful for adding request IDs, user IDs, or other contextual information that should be included in all log messages within a specific scope.
- Parameters:
**kwargs – Key-value pairs to add to log context
Example
>>> with log_context(request_id='abc-123', user_id=42):
...     logger.info("Processing request")
...     # Log will include request_id and user_id
Note
Context is thread-safe using ContextVar. Each async task or thread maintains its own context.
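Since the note pins the mechanism to ContextVar, the pattern likely resembles the following sketch (class and filter names here are assumptions; only the ContextVar approach comes from the note):

import contextvars
import logging

_log_ctx: contextvars.ContextVar = contextvars.ContextVar("log_ctx", default={})

class ContextFilter(logging.Filter):
    """Copy the current context's key-value pairs onto each record."""

    def filter(self, record: logging.LogRecord) -> bool:
        for key, value in _log_ctx.get().items():
            setattr(record, key, value)
        return True

class log_context_sketch:
    """Merge kwargs into the ContextVar for the enclosed scope, then restore."""

    def __init__(self, **kwargs):
        self._kwargs = kwargs
        self._token = None

    def __enter__(self):
        self._token = _log_ctx.set({**_log_ctx.get(), **self._kwargs})
        return self

    def __exit__(self, *exc):
        _log_ctx.reset(self._token)  # the outer scope's context is restored
        return False

Because each thread and each asyncio task gets its own ContextVar value, nested scopes and concurrent requests never see each other's context.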