Step 1: Understanding the Error
Python's built-in logging module works fine for simple scripts, but in production environments, you'll run into structural problems that force developers into workarounds. The standard logging module has a steep setup curve, inconsistent formatting across modules, poor integration with asynchronous code, and difficult-to-manage handlers and formatters.
The most common issue looks like this: Your application logs are scattered, inconsistent in format, missing context about which function or thread generated them, and impossible to route to different outputs without creating a tangled configuration. This is where Loguru solves problems before they start.
Let's reproduce the core problem with standard logging:
import logging
# Problem: Standard logging setup is verbose and fragile
logger = logging.getLogger(__name__)
handler = logging.FileHandler('app.log')
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
# Problem: Missing context about execution source
logger.info("User action completed")
# Problem: Multiple modules create duplicate setup code
# You have to repeat this in every file
When you run this code, you'll get:
$ python script.py
2024-04-14 12:34:56,123 - __main__ - INFO - User action completed
This looks fine in isolation. But now scale it: multiple files, different handlers, rotating logs, structured output, performance monitoring. Your codebase explodes with boilerplate, handlers step on each other, and adding a new logging destination means touching half your modules.
Step 2: Identifying the Cause
The standard logging module's design assumes centralized configuration. Everything requires manual setup: creating loggers, adding handlers, setting formatters, managing levels. Here are the specific pain points:
Problem 1: Boilerplate Everywhere Every module that needs logging repeats the same setup code. There's no sensible default.
Problem 2: Inconsistent Format Across Modules Different files might use different formats because each logger was configured separately or not at all. You lose context when debugging.
Problem 3: Poor Async Support Standard logging wasn't built for modern async Python. Coordinating logs from concurrent operations becomes messy.
Problem 4: Difficult Handler Routing Sending different log levels to different destinations (INFO to stdout, ERROR to file, CRITICAL to alerting system) requires complex configuration or custom handler classes.
Problem 5: Missing Context Injection Adding request IDs, user IDs, or correlation IDs to every log line requires context variables and custom formatters—or you manually add them to every message.
Loguru solves all of these by making sane defaults the starting point and removing the configuration tax.
Step 3: Implementing the Solution
Solution Part 1: Basic Loguru Setup
Install Loguru first:
$ pip install loguru
Now replace your standard logging with this:
from loguru import logger
# That's it. Loguru already logs to stderr with a sensible format
logger.info("User action completed")
logger.debug("Debugging information")
logger.error("An error occurred")
Output:
2024-04-14 12:34:56.123 | INFO     | __main__:<module>:5 - User action completed
2024-04-14 12:34:56.124 | DEBUG    | __main__:<module>:6 - Debugging information
2024-04-14 12:34:56.125 | ERROR    | __main__:<module>:7 - An error occurred
Notice the improvements: timestamp, level, file location with line number, function name. All automatic. No setup needed.
Solution Part 2: Removing Default Handler and Adding File Output
Loguru logs to stderr by default. For production, you'll want to remove that and add structured file output:
import sys
from loguru import logger
# Remove default stderr handler
logger.remove()
# Add file handler with rotation
logger.add(
    "logs/app.log",
    format="{time:YYYY-MM-DD HH:mm:ss} | {level: <8} | {name}:{function}:{line} - {message}",
    level="INFO",
    rotation="500 MB",  # Rotate when the file reaches 500 MB
    retention="7 days"  # Keep rotated logs for 7 days
)
# Keep console output for development
if __name__ == "__main__":
    logger.add(sys.stdout, format="{time:HH:mm:ss} | {level} | {message}", level="DEBUG")
logger.info("Application started")
logger.debug("This only shows in development console")
The rotation parameter prevents log files from consuming all disk space, and retention auto-deletes old logs. In standard logging, rotation requires switching to RotatingFileHandler, and retention-style cleanup requires custom code.
Solution Part 3: Context Injection with bind()
This is where Loguru shines compared to standard logging. Add contextual data to every log line in a scope without repeating it:
from loguru import logger
# Inject user_id and request_id into all logs within this context
request_id = "req-12345"
user_id = "user-789"
logger_with_context = logger.bind(request_id=request_id, user_id=user_id)
logger_with_context.info("Processing user request")
logger_with_context.error("Failed to save data")
# Output:
# 2024-04-14 12:34:56.123 | INFO     | __main__:<module>:11 - Processing user request
# 2024-04-14 12:34:56.124 | ERROR    | __main__:<module>:12 - Failed to save data
Notice that the default format doesn't display the bound values. To see them, include {extra} in the format string:
from loguru import logger
import sys
logger.remove()
logger.add(
sys.stdout,
format="{time:HH:mm:ss} | {level} | {message} | extra={extra}",
level="INFO"
)
request_id = "req-12345"
user_id = "user-789"
logger_with_context = logger.bind(request_id=request_id, user_id=user_id)
logger_with_context.info("Processing user request")
# Output:
# 12:34:56 | INFO | Processing user request | extra={'request_id': 'req-12345', 'user_id': 'user-789'}
Much better. In real applications, parse extra into your logging aggregation tool (Datadog, ELK, etc.) for structured querying.
Solution Part 4: Async-Safe Logging
Standard logging blocks the calling thread during handler I/O, which can stall an event loop, and coordinating output from many concurrent tasks takes extra setup. Loguru handles concurrent writers out of the box:
import asyncio
from loguru import logger

async def process_job(job_id):
    logger.info(f"Starting job {job_id}")
    await asyncio.sleep(1)
    logger.info(f"Completed job {job_id}")

async def main():
    tasks = [process_job(i) for i in range(5)]
    await asyncio.gather(*tasks)

if __name__ == "__main__":
    asyncio.run(main())
# Output (every line intact, none garbled mid-write):
# 2024-04-14 12:34:56.123 | INFO     | __main__:process_job:5 - Starting job 0
# 2024-04-14 12:34:56.124 | INFO     | __main__:process_job:5 - Starting job 1
# 2024-04-14 12:34:56.125 | INFO     | __main__:process_job:5 - Starting job 2
# ...
Loguru's handlers take a lock around each write, so concurrent tasks never produce garbled lines, and the enqueue=True option moves writes off the event loop entirely. With standard logging you'd typically reach for QueueHandler and QueueListener to get the same effect.
Solution Part 5: Structured Logging (JSON Output)
For production environments with log aggregation systems, output structured JSON:
import json
import sys
from loguru import logger

def json_formatter(record):
    """Serialize the record and return a format template.

    A callable passed as ``format`` must return a format *template*, not
    the final string; returning raw JSON would make Loguru parse its
    braces as fields. So stash the JSON in ``extra`` and reference it.
    """
    log_data = {
        "timestamp": record["time"].isoformat(),
        "level": record["level"].name,
        "message": record["message"],
        "module": record["name"],
        "function": record["function"],
        "line": record["line"],
    }
    # Add any bound context (everything except our own stash key)
    log_data.update({k: v for k, v in record["extra"].items() if k != "serialized"})
    record["extra"]["serialized"] = json.dumps(log_data)
    return "{extra[serialized]}\n"

logger.remove()
logger.add(sys.stdout, format=json_formatter)
logger_ctx = logger.bind(request_id="req-001", user_id="usr-42")
logger_ctx.info("User logged in")
logger_ctx.error("Authentication failed")
# Output:
# {"timestamp": "2024-04-14T12:34:56.123456", "level": "INFO", "message": "User logged in", "module": "__main__", "function": "<module>", "line": 18, "request_id": "req-001", "user_id": "usr-42"}
# {"timestamp": "2024-04-14T12:34:56.124567", "level": "ERROR", "message": "Authentication failed", "module": "__main__", "function": "<module>", "line": 19, "request_id": "req-001", "user_id": "usr-42"}
This JSON output integrates seamlessly with Datadog, ELK, CloudWatch, or any log aggregation tool.
Solution Part 6: Exception Logging with Full Traceback
Loguru's exception handling is significantly better than standard logging:
from loguru import logger

def divide(a, b):
    return a / b

try:
    result = divide(10, 0)
except Exception:
    # logger.exception() records at ERROR level and attaches the full
    # traceback automatically; no exc_info juggling required
    logger.exception("Division operation failed")
# Output includes the full traceback with syntax highlighting in terminal:
# 2024-04-14 12:34:56.123 | ERROR | __main__:10 - Division operation failed
# Traceback (most recent call last):
# File "script.py", line 8, in <module>
# result = divide(10, 0)
# File "script.py", line 6, in divide
# return a / b
# ZeroDivisionError: division by zero
Standard logging's logging.exception() also attaches a traceback, but Loguru's version colorizes the frames in the terminal and, via its backtrace and diagnose options, annotates each frame with the values of surrounding variables.
Step 4: Complete Working Example
Here's a production-ready example combining all concepts:
import sys
import json
import asyncio
from pathlib import Path
from loguru import logger

# Configure Loguru for production
def setup_logging(env: str = "development"):
    """Initialize Loguru with environment-specific configuration"""
    # Remove default handler
    logger.remove()

    # Create logs directory if it doesn't exist
    log_dir = Path("logs")
    log_dir.mkdir(exist_ok=True)

    # Define format based on environment
    if env == "production":
        # JSON format for log aggregation. The callable must return a
        # format *template*, so the JSON is stashed in extra first.
        def json_format(record):
            log_data = {
                "timestamp": record["time"].isoformat(),
                "level": record["level"].name,
                "message": record["message"],
                "module": record["name"],
                "function": record["function"],
                "line": record["line"],
            }
            log_data.update({k: v for k, v in record["extra"].items() if k != "serialized"})
            record["extra"]["serialized"] = json.dumps(log_data)
            return "{extra[serialized]}\n"

        # Log to file only in production
        logger.add(
            log_dir / "app.log",
            format=json_format,
            level="INFO",
            rotation="500 MB",
            retention="30 days"
        )
        # Also log errors to a separate file
        logger.add(
            log_dir / "errors.log",
            format=json_format,
            level="ERROR",
            rotation="100 MB",
            retention="90 days"
        )
    else:
        # Human-readable format for development
        logger.add(
            sys.stdout,
            format="<green>{time:HH:mm:ss}</green> | <level>{level: <8}</level> | <cyan>{name}:{function}:{line}</cyan> - <level>{message}</level>",
            level="DEBUG",
            colorize=True
        )
        # Also write to file for reference
        logger.add(
            log_dir / "dev.log",
            format="{time:YYYY-MM-DD HH:mm:ss} | {level: <8} | {name}:{function}:{line} - {message}",
            level="DEBUG"
        )

# Initialize logging
setup_logging(env="development")

class DataProcessor:
    def __init__(self, processor_id: str):
        self.processor_id = processor_id
        self.logger = logger.bind(processor_id=processor_id)

    async def process_item(self, item_id: str):
        """Simulate async work with logging"""
        self.logger.info(f"Starting to process item {item_id}")
        try:
            # Simulate work
            await asyncio.sleep(0.1)
            if item_id == "error_item":
                raise ValueError(f"Invalid item: {item_id}")
            self.logger.debug(f"Item {item_id} processed successfully")
            return {"status": "success", "item_id": item_id}
        except ValueError:
            self.logger.exception(f"Failed to process item {item_id}")
            return {"status": "error", "item_id": item_id}

async def main():
    """Main execution with multiple concurrent processors"""
    logger.info("Application started")
    processor1 = DataProcessor("proc-001")
    processor2 = DataProcessor("proc-002")

    # Process items concurrently, alternating between processors
    items = ["item-1", "item-2", "error_item", "item-3"]
    tasks = []
    for i, item in enumerate(items):
        processor = processor1 if i % 2 == 0 else processor2
        tasks.append(processor.process_item(item))

    results = await asyncio.gather(*tasks)
    logger.info(f"Processing complete. Results: {results}")
    logger.info("Application shutting down")

if __name__ == "__main__":
    asyncio.run(main())
Run this:
$ python script.py
Expected output (development mode; exact line numbers depend on your file, and the bound processor_id isn't shown because {extra} isn't part of the dev format string):
12:34:56 | INFO     | __main__:main:87 - Application started
12:34:56 | INFO     | __main__:process_item:72 - Starting to process item item-1
12:34:56 | INFO     | __main__:process_item:72 - Starting to process item item-2
12:34:56 | INFO     | __main__:process_item:72 - Starting to process item error_item
12:34:56 | INFO     | __main__:process_item:72 - Starting to process item item-3
12:34:56 | DEBUG    | __main__:process_item:77 - Item item-1 processed successfully
12:34:56 | ERROR    | __main__:process_item:81 - Failed to process item error_item
Traceback (most recent call last):
  File "script.py", line 76, in process_item
    raise ValueError(f"Invalid item: {item_id}")
ValueError: Invalid item: error_item
12:34:56 | INFO     | __main__:main:96 - Processing complete. Results: [{'status': 'success', 'item_id': 'item-1'}, {'status': 'success', 'item_id': 'item-2'}, {'status': 'error', 'item_id': 'error_item'}, {'status': 'success', 'item_id': 'item-3'}]
12:34:56 | INFO     | __main__:main:97 - Application shutting down
Log files are also created in logs/app.log and logs/dev.log for reference.
Step 5: Common Issues and Troubleshooting
Issue: Logs not appearing where expected
First check each handler's level: a message below every handler's threshold is silently dropped. Also remember that Loguru's default stderr handler stays active until you remove it, so output may be going to stderr instead of (or in addition to) your file:
from loguru import logger
logger.remove() # Essential—removes default stderr handler
logger.add("app.log")
Issue: File permission errors on Windows
Windows locks open log files, which can break rotation or raise PermissionError when multiple writers touch the same file. Routing all writes through a single background thread with enqueue=True usually resolves this:
logger.add("app.log", enqueue=True)
Issue: Log rotation not working
The rotation condition must match exactly. Common mistakes:
# Wrong: calling add() twice registers two handlers, so every
# message is written (and rotated) twice
logger.add("app.log", rotation="1 day")
logger.add("app.log", rotation="1 day")  # second, duplicate handler
# Correct—remove() first, then add once during setup
logger.remove()
logger.add("app.log", rotation="1 day")
Issue: Wasteful repeated bind() calls
Each bind() call creates a new logger object. Binding inside a hot loop churns allocations for no benefit; bind once per unit of work and reuse the instance:
# Wasteful: a new bound logger is created on every iteration
for request_id in request_ids:
    logger.bind(request_id=request_id).info("Processing")
# Better: bind once per request and reuse the instance
def handle_request(request_id):
    request_logger = logger.bind(request_id=request_id)
    request_logger.info("Processing")
    # ...all further logging in this request reuses request_logger
Issue: Multiprocess logging collisions
If multiple processes write to the same log file, use enqueue=True:
logger.add("app.log", enqueue=True)
This routes every message through an inter-process-safe queue, at a minor performance cost. For truly high-throughput multiprocess systems, write to stdout and let the operating system aggregate (or use a logging daemon).
Step 6: Performance Considerations
Loguru is faster than standard logging for typical use cases, but consider these points:
Defer expensive values with lazy evaluation: if a message embeds the result of an expensive computation, pass a callable through logger.opt(lazy=True) so the work only runs when a handler actually accepts the message:
# Expensive: runs even when no handler accepts DEBUG
logger.debug(f"Stats: {expensive_computation()}")
# Cheaper: the lambda is only invoked if the message will be emitted
logger.opt(lazy=True).debug("Stats: {}", lambda: expensive_computation())
Disable color in production: ANSI color codes add overhead and pollute log files. Loguru already disables them for non-terminal sinks, but being explicit costs nothing:
# Development—colorize=True is fine
logger.add(sys.stdout, colorize=True)
# Production—disable color
logger.add("app.log", colorize=False)
Use enqueue=True deliberately: it's what you need for multiprocess safety and non-blocking writes, but the queue adds overhead. Profile your application if throughput is critical.
Related Errors and Solutions
AttributeError: module 'loguru' has no attribute 'info'
You're trying to access loguru directly instead of importing logger:
# Wrong
import loguru
loguru.info("message")
# Correct
from loguru import logger
logger.info("message")
NameError: name 'logger' is not defined
This happens when a module uses logger without importing it. Check that every file has:
from loguru import logger
logger.exception() runs but the traceback is missing
logger.exception() only captures a traceback when called inside an except block, because that's where an active exception exists:
try:
    risky_operation()
except Exception:
    logger.exception("Operation failed")  # must be inside the except block
Logs in production are too verbose
Adjust the level per handler, not globally:
logger.remove()
# Only INFO and above to file
logger.add("app.log", level="INFO")
# DEBUG to stderr only in development
if debug_mode:
logger.add(sys.stderr, level="DEBUG")
Advanced: Custom Serialization for Log Aggregation
For Datadog or similar services, customize the JSON structure to match their expectations:
import json
import sys
from loguru import logger

def datadog_formatter(record):
    """Build a Datadog-friendly JSON document and return a format template."""
    # Datadog expects specific field names
    log_entry = {
        "timestamp": record["time"].timestamp() * 1000,  # epoch milliseconds
        "level": record["level"].name,
        "message": record["message"],
        "logger.name": record["name"],
        "logger.method_name": record["function"],
        "logger.line_number": record["line"],
        "hostname": "my-app-01",    # add hostname for log grouping
        "service": "user-service",  # service name
        "version": "1.0.0",         # app version
    }
    # Flatten bound context at the top level (skip our own stash key)
    log_entry.update({k: v for k, v in record["extra"].items() if k != "serialized"})
    # As in the earlier formatter, a format callable must return a
    # template, not raw JSON, so stash the payload in extra
    record["extra"]["serialized"] = json.dumps(log_entry)
    return "{extra[serialized]}\n"

logger.remove()
logger.add(sys.stdout, format=datadog_formatter)
# Use it
user_logger = logger.bind(user_id="123", request_id="abc")
user_logger.info("User action performed")
This produces output that Datadog or similar tools can parse with little additional pipeline configuration.
Loguru transforms Python logging from a tedious configuration burden into a productive tool. The transition from standard logging to Loguru takes minutes but eliminates hours of debugging and configuration headaches. Start with the basic setup, layer in file handlers and rotation, then add context binding as your application scales.