
Ultimate Guide to RESTful API Development in Go Part 2: Database Persistence Using PostgreSQL and Docker

Transform your Go REST API from in-memory storage to production-ready PostgreSQL persistence with Docker, connection pooling, and transaction management.

Aaron Mathis
34 min read

Welcome to Part 2 of our RESTful API development series. In Part 1, we built a functional Todo API using in-memory storage. While perfect for learning the fundamentals, production APIs require persistent data storage that survives server restarts and handles concurrent access safely.

In this tutorial, we’ll transform our API to use PostgreSQL as our database backend. You’ll learn how to set up PostgreSQL using Docker, implement proper database connections with connection pooling, manage transactions effectively, and structure your code for maintainability. By the end, you’ll have a production-ready API that handles data persistence professionally.

PostgreSQL is an excellent choice for Go applications thanks to its mature Go driver support, ACID compliance, and powerful features like JSON columns and full-text search. Docker simplifies database setup and ensures consistent environments across development and production.

Setting Up PostgreSQL with Docker

Docker provides the fastest way to get PostgreSQL running locally without installing database software directly on your machine. This approach matches how many production environments handle database deployment and makes it easy to tear down and recreate your database during development.

First, create a docker-compose.yml file in your project root to define our PostgreSQL service:

version: '3.8'
services:
  postgres:
    image: postgres:15-alpine
    container_name: todo_postgres
    restart: unless-stopped
    environment:
      POSTGRES_DB: todoapi
      POSTGRES_USER: todouser
      POSTGRES_PASSWORD: todopass
      POSTGRES_HOST_AUTH_METHOD: scram-sha-256
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./sql:/docker-entrypoint-initdb.d
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U todouser -d todoapi"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s
    networks:
      - todo_network

  # Optional: pgAdmin for database management
  pgadmin:
    image: dpage/pgadmin4:latest
    container_name: todo_pgadmin
    restart: unless-stopped
    environment:
      PGADMIN_DEFAULT_EMAIL: [email protected]
      PGADMIN_DEFAULT_PASSWORD: admin
      PGADMIN_LISTEN_PORT: 80
    ports:
      - "8081:80"
    volumes:
      - pgadmin_data:/var/lib/pgadmin
    networks:
      - todo_network
    depends_on:
      postgres:
        condition: service_healthy

volumes:
  postgres_data:
    driver: local
  pgadmin_data:
    driver: local

networks:
  todo_network:
    driver: bridge

This configuration creates a PostgreSQL 15 container with Alpine Linux for a smaller image size. The healthcheck section ensures the database is ready before your application attempts to connect. The volumes section persists data between container restarts and automatically runs initialization scripts.

The environment variables create a dedicated database and user rather than relying on the default postgres account. Keep in mind that the user the official image creates is still a superuser within that instance; for production you would typically add an application role granted only the privileges your API needs.

Create the SQL initialization script to set up our table structure. Make a sql directory and create init.sql:

-- Database initialization script for Todo API
-- This script runs automatically when the PostgreSQL container starts

-- Create todos table with proper indexing and constraints
CREATE TABLE todos (
    id SERIAL PRIMARY KEY,
    title VARCHAR(255) NOT NULL CHECK (length(title) > 0),
    description TEXT,
    completed BOOLEAN NOT NULL DEFAULT FALSE,
    created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW()
);

-- Create index for common query patterns
CREATE INDEX idx_todos_completed ON todos(completed);
CREATE INDEX idx_todos_created_at ON todos(created_at);
CREATE INDEX idx_todos_title ON todos USING gin(to_tsvector('english', title));

-- Create a function to automatically update the updated_at timestamp
CREATE OR REPLACE FUNCTION update_updated_at_column()
RETURNS TRIGGER AS $$
BEGIN
    NEW.updated_at = NOW();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Create trigger to automatically update updated_at on row changes
CREATE TRIGGER update_todos_updated_at
    BEFORE UPDATE ON todos
    FOR EACH ROW
    EXECUTE FUNCTION update_updated_at_column();

-- Insert sample data for testing
INSERT INTO todos (title, description, completed) VALUES
    ('Learn Go basics', 'Study Go syntax, types, and basic programming concepts', false),
    ('Set up PostgreSQL', 'Configure PostgreSQL database with Docker for persistent storage', true),
    ('Implement REST API', 'Build RESTful endpoints for todo CRUD operations', false),
    ('Add authentication', 'Implement JWT-based authentication for API security', false),
    ('Deploy to production', 'Set up CI/CD pipeline and deploy to cloud provider', false);

-- Create a schema_versions table for migration tracking
CREATE TABLE schema_versions (
    version INTEGER PRIMARY KEY,
    description TEXT NOT NULL,
    applied_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW()
);

-- Record the initial schema version
INSERT INTO schema_versions (version, description) VALUES
    (1, 'Initial schema with todos table and basic indexes');

For production environments, you’ll want to implement proper database migrations. Create a sql/migrations directory structure:

mkdir -p sql/migrations

Create migration files following a naming convention like 001_initial_schema.up.sql and 001_initial_schema.down.sql for forward and rollback migrations respectively.
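If you adopt this convention, applying migrations can also be driven from Go. Below is a minimal, hedged sketch using the golang-migrate library (github.com/golang-migrate/migrate); the DSN and paths are illustrative for this tutorial's layout and are not part of the code we build in this series:

package main

import (
	"log"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/postgres" // registers the postgres database driver
	_ "github.com/golang-migrate/migrate/v4/source/file"       // registers the file:// migration source
)

func main() {
	m, err := migrate.New(
		"file://sql/migrations",
		"postgres://todouser:todopass@localhost:5432/todoapi?sslmode=disable",
	)
	if err != nil {
		log.Fatal(err)
	}
	// Apply all pending "up" migrations; ErrNoChange simply means the
	// schema is already current, which is not a failure.
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}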

The SERIAL type automatically generates unique IDs, while TIMESTAMP WITH TIME ZONE ensures proper time handling across different timezones. The CHECK constraint prevents empty titles, and indexes optimize common query patterns like filtering by completion status.

Start your PostgreSQL container:

docker-compose up -d postgres

The -d flag runs the container in the background. You can verify the database is running with:

docker-compose logs postgres

You should see PostgreSQL startup messages and confirmation that the database is ready to accept connections.

todo_postgres | PostgreSQL init process complete; ready for start up.
todo_postgres | 
todo_postgres | 2025-07-26 05:22:54.696 UTC [1] LOG:  starting PostgreSQL 15.13 on x86_64-pc-linux-musl, compiled by gcc (Alpine 14.2.0) 14.2.0, 64-bit
todo_postgres | 2025-07-26 05:22:54.696 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
todo_postgres | 2025-07-26 05:22:54.696 UTC [1] LOG:  listening on IPv6 address "::", port 5432
todo_postgres | 2025-07-26 05:22:54.697 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
todo_postgres | 2025-07-26 05:22:54.700 UTC [59] LOG:  database system was shut down at 2025-07-26 05:22:54 UTC
todo_postgres | 2025-07-26 05:22:54.704 UTC [1] LOG:  database system is ready to accept connections

Installing Database Dependencies

Go’s database ecosystem centers around the database/sql package, which provides a generic interface for SQL databases. We’ll use the popular pq driver for PostgreSQL connectivity and sqlx for enhanced query building and scanning.

go get github.com/lib/pq
go get github.com/jmoiron/sqlx

The pq driver implements Go’s database/sql interface for PostgreSQL, supporting prepared statements and PostgreSQL-specific error handling, while the standard library supplies connection pooling on top of it. The sqlx library extends the standard library with convenient methods for scanning query results into structs and handling named parameters.

Although pq is now in maintenance mode and newer projects often reach for pgx, it remains widely used in production systems and is stable and performant for most applications.


Database Configuration and Connection Management

Proper database configuration is crucial for production applications. Create a new file internal/database/config.go to centralize database settings:

package database

import (
	"fmt"
	"os"
	"strconv"
	"time"
)

// Config holds the database configuration settings
type Config struct {
	Host            string
	Port            int
	User            string
	Password        string
	DBName          string
	SSLMode         string
	MaxOpenConns    int
	MaxIdleConns    int
	ConnMaxLifetime time.Duration
	ConnMaxIdleTime time.Duration
}

// LoadConfig loads the database configuration from environment variables
func LoadConfig() Config {
	return Config{
		Host:            getEnvWithDefault("DB_HOST", "localhost"),
		Port:            getEnvAsIntWithDefault("DB_PORT", 5432),
		User:            getEnvWithDefault("DB_USER", "todouser"),
		Password:        getEnvWithDefault("DB_PASSWORD", "todopass"),
		DBName:          getEnvWithDefault("DB_NAME", "todoapi"),
		SSLMode:         getEnvWithDefault("DB_SSLMODE", "disable"),
		MaxOpenConns:    getEnvAsIntWithDefault("DB_MAX_OPEN_CONNS", 25),
		MaxIdleConns:    getEnvAsIntWithDefault("DB_MAX_IDLE_CONNS", 5),
		ConnMaxLifetime: getEnvAsDurationWithDefault("DB_CONN_MAX_LIFETIME", 5*time.Minute),
		ConnMaxIdleTime: getEnvAsDurationWithDefault("DB_CONN_MAX_IDLE_TIME", 5*time.Minute),
	}
}

// ConnectionString returns the database connection string
func (c Config) ConnectionString() string {
	return fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=%s",
		c.Host, c.Port, c.User, c.Password, c.DBName, c.SSLMode)
}

// getEnvWithDefault retrieves an environment variable with a default value
func getEnvWithDefault(key, defaultValue string) string {
	if value := os.Getenv(key); value != "" {
		return value
	}
	return defaultValue
}

// getEnvAsIntWithDefault retrieves an environment variable as an integer with a default value
func getEnvAsIntWithDefault(key string, defaultValue int) int {
	if value := os.Getenv(key); value != "" {
		if intValue, err := strconv.Atoi(value); err == nil {
			return intValue
		}
	}
	return defaultValue
}

// getEnvAsDurationWithDefault retrieves an environment variable as a time.Duration with a default value
func getEnvAsDurationWithDefault(key string, defaultValue time.Duration) time.Duration {
	if value := os.Getenv(key); value != "" {
		if duration, err := time.ParseDuration(value); err == nil {
			return duration
		}
	}
	return defaultValue
}

This configuration approach allows easy customization for different environments (development, staging, production) without code changes. The connection pool settings are particularly important for production performance and resource management.
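As a quick illustration, overriding a couple of variables changes the generated connection string without touching code (hypothetical values shown):

package main

import (
	"fmt"
	"os"

	"todo-api/internal/database"
)

func main() {
	// Hypothetical staging overrides; anything unset falls back to its default.
	os.Setenv("DB_HOST", "staging-db.internal")
	os.Setenv("DB_MAX_OPEN_CONNS", "50")

	cfg := database.LoadConfig()
	fmt.Println(cfg.ConnectionString())
	// Output: host=staging-db.internal port=5432 user=todouser password=todopass dbname=todoapi sslmode=disable
}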


Implementing Database Connection with Connection Pooling

Connection pooling is a technique used to manage and reuse database connections efficiently. Instead of opening and closing a new connection for every database request, which is resource-intensive and slow, a pool of connections is maintained. When an application needs to interact with the database, it borrows a connection from the pool and returns it when done. This approach offers several benefits:

  • Performance: Reduces the latency associated with repeatedly establishing and tearing down connections.
  • Resource Management: Limits the number of open connections, preventing resource exhaustion on the database server.
  • Scalability: Supports higher throughput by allowing multiple requests to share a limited set of connections.
  • Reliability: Helps avoid connection leaks and improves error handling by centralizing connection management.

Most modern database drivers and ORMs (Object-Relational Mappers) provide built-in support for connection pooling. In Go, the standard database/sql package automatically manages a pool of connections for you.

You can configure parameters such as the maximum number of open connections, idle connections, and connection lifetimes to optimize performance for your application’s workload.

Understanding and tuning connection pooling is crucial for building scalable, high-performance APIs that interact with databases.

Create internal/database/database.go:

package database

import (
	"context"
	"database/sql"
	"fmt"
	"time"

	"github.com/jmoiron/sqlx"
	_ "github.com/lib/pq"
)

type DB struct {
	*sqlx.DB
}

// NewConnection creates a new database connection
func NewConnection(config Config) (*DB, error) {
	connectionString := config.ConnectionString()

	sqlxDB, err := sqlx.Connect("postgres", connectionString)
	if err != nil {
		return nil, fmt.Errorf("failed to connect to database: %w", err)
	}

	// Configure connection pool
	sqlxDB.SetMaxOpenConns(config.MaxOpenConns)
	sqlxDB.SetMaxIdleConns(config.MaxIdleConns)
	sqlxDB.SetConnMaxLifetime(config.ConnMaxLifetime)
	sqlxDB.SetConnMaxIdleTime(config.ConnMaxIdleTime)

	// Verify connection
	if err := sqlxDB.Ping(); err != nil {
		return nil, fmt.Errorf("failed to ping database: %w", err)
	}

	return &DB{sqlxDB}, nil
}

// Close closes the database connection
func (db *DB) Close() error {
	return db.DB.Close()
}

// HealthCheck verifies database connectivity
func (db *DB) HealthCheck() error {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	var result int
	err := db.GetContext(ctx, &result, "SELECT 1")
	if err != nil {
		return fmt.Errorf("database health check failed: %w", err)
	}

	return nil
}

// GetStats returns database connection pool statistics
func (db *DB) GetStats() sql.DBStats {
	return db.DB.Stats()
}

The SetMaxOpenConns function limits total connections to prevent overwhelming the database server. SetMaxIdleConns keeps connections ready for reuse, while SetConnMaxLifetime ensures connections are periodically refreshed to handle network issues and database restarts gracefully.

The Ping() method verifies the connection works before the application starts handling requests. This fail-fast approach catches configuration problems early rather than during the first API request.
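If you want visibility into how the pool behaves under real load, a small background logger built on GetStats is one option. This is a sketch, not part of the tutorial's required code; it assumes the log and time packages are imported in the database package:

// logPoolStats periodically reports connection pool usage. Start it once
// after NewConnection succeeds, e.g. go logPoolStats(db, 30*time.Second).
func logPoolStats(db *DB, every time.Duration) {
	for range time.Tick(every) {
		s := db.GetStats()
		log.Printf("db pool: open=%d in_use=%d idle=%d wait_count=%d",
			s.OpenConnections, s.InUse, s.Idle, s.WaitCount)
	}
}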


Refactoring the Todo Model for Database Operations

Our original Todo model needs updates to work effectively with database operations. Update internal/models/todo.go:


package models

import "time"

// Todo represents a single todo item
type Todo struct {
	ID          int       `json:"id" db:"id"`
	Title       string    `json:"title" db:"title"`
	Description string    `json:"description" db:"description"`
	Completed   bool      `json:"completed" db:"completed"`
	CreatedAt   time.Time `json:"created_at" db:"created_at"`
	UpdatedAt   time.Time `json:"updated_at" db:"updated_at"`
}

// TodoRequest represents the expected JSON structure for creating/updating todos
type TodoRequest struct {
	Title       string `json:"title" binding:"required" validate:"min=1,max=255"`
	Description string `json:"description" validate:"max=1000"`
	Completed   bool   `json:"completed"`
}

And add TodoFilter:


// TodoFilter represents the expected query parameters for filtering todos
type TodoFilter struct {
	Completed *bool  `form:"completed"`
	Search    string `form:"search"`
	Limit     int    `form:"limit,default=10" validate:"min=1,max=100"`
	Offset    int    `form:"offset,default=0" validate:"min=0"`
}

The db struct tags tell sqlx how to map database columns to struct fields. This enables automatic scanning of query results into structs without manual field mapping. The TodoFilter struct supports common API patterns like filtering and pagination that become essential with persistent data.

The validation tags prepare for input validation that we’ll implement fully in Part 5. Setting reasonable limits on request parameters prevents abuse and helps with database performance.


Creating the Todo Repository Pattern

The repository pattern separates data access logic from business logic, making your code more testable and maintainable. To enhance testability, we’ll define a database interface that the repository can use, allowing us to inject mock implementations during unit testing.

First, create the database interface in internal/database/interfaces.go:

package database

import (
	"context"
	"database/sql"
)

// TodoDB defines the database operations needed by the TodoRepository
type TodoDB interface {
	Select(dest interface{}, query string, args ...interface{}) error
	Get(dest interface{}, query string, args ...interface{}) error
	Exec(query string, args ...interface{}) (sql.Result, error)
	SelectContext(ctx context.Context, dest interface{}, query string, args ...interface{}) error
	GetContext(ctx context.Context, dest interface{}, query string, args ...interface{}) error
	ExecContext(ctx context.Context, query string, args ...interface{}) (sql.Result, error)
	// Transaction support
	Beginx() (Tx, error)
}

Using an interface instead of a concrete *database.DB type provides several key benefits:

  • Testability: You can create mock implementations that don’t require a real database connection
  • Flexibility: The interface can be satisfied by different database implementations (testing, production, or even non-SQL stores)
  • Dependency Inversion: High-level modules (repositories) don’t depend on low-level modules (specific database implementations)

Now create internal/repository/todo_repository.go:

package repository

import (
	"database/sql"
	"fmt"
	"strings"

	"todo-api/internal/database"
	"todo-api/internal/models"
)

type TodoRepository struct {
	db database.TodoDB
}

// NewTodoRepository creates a new TodoRepository with the provided database interface
// Using an interface instead of a concrete type makes unit testing easier by allowing
// us to inject mock implementations without requiring a real database connection
func NewTodoRepository(db database.TodoDB) *TodoRepository {
	return &TodoRepository{db: db}
}

// GetAll retrieves all todo items with optional filtering.
func (r *TodoRepository) GetAll(filter models.TodoFilter) ([]*models.Todo, error) {
	query := "SELECT id, title, description, completed, created_at, updated_at FROM todos"
	args := []interface{}{}
	conditions := []string{}
	argIndex := 1

	if filter.Completed != nil {
		conditions = append(conditions, fmt.Sprintf("completed = $%d", argIndex))
		args = append(args, *filter.Completed)
		argIndex++
	}

	if filter.Search != "" {
		searchPattern := "%" + filter.Search + "%"
		conditions = append(conditions, fmt.Sprintf("(title ILIKE $%d OR description ILIKE $%d)", argIndex, argIndex))
		args = append(args, searchPattern)
		argIndex++
	}

	if len(conditions) > 0 {
		query += " WHERE " + strings.Join(conditions, " AND ")
	}

	query += " ORDER BY created_at DESC"
	query += fmt.Sprintf(" LIMIT $%d OFFSET $%d", argIndex, argIndex+1)
	args = append(args, filter.Limit, filter.Offset)

	todos := []*models.Todo{}
	err := r.db.Select(&todos, query, args...)
	if err != nil {
		return nil, fmt.Errorf("failed to get todos: %w", err)
	}

	return todos, nil
}

This method demonstrates several important patterns. The dynamic query building allows flexible filtering without SQL injection vulnerabilities by using parameterized queries. The ILIKE operator provides case-insensitive pattern matching across the title and description fields. Note that ILIKE does not use the full-text GIN index from our init script, which accelerates to_tsvector/to_tsquery searches; for large tables you would switch to full-text queries or add a trigram index.

The Select method from sqlx automatically scans multiple rows into a slice of structs, eliminating boilerplate code for iterating through sql.Rows. Error wrapping with fmt.Errorf preserves the original error while adding context about the operation that failed.
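To make the dynamic building concrete, here is the query GetAll produces for one representative filter, traced by hand from the code above:

// filter := models.TodoFilter{Completed: &f, Search: "go", Limit: 5, Offset: 0}
// (with f := false) builds:
//
//   SELECT id, title, description, completed, created_at, updated_at FROM todos
//   WHERE completed = $1 AND (title ILIKE $2 OR description ILIKE $2)
//   ORDER BY created_at DESC LIMIT $3 OFFSET $4
//
// with args = [false, "%go%", 5, 0]. Note that $2 appears twice but binds a
// single argument; PostgreSQL's numbered placeholders allow this reuse.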

Add the remaining repository methods:


// GetByID retrieves a todo item by its ID.
func (r *TodoRepository) GetByID(id int) (*models.Todo, error) {
	todo := &models.Todo{}
	query := "SELECT id, title, description, completed, created_at, updated_at FROM todos WHERE id = $1"

	err := r.db.Get(todo, query, id)
	if err != nil {
		if err == sql.ErrNoRows {
			return nil, fmt.Errorf("todo with id %d not found", id)
		}
		return nil, fmt.Errorf("failed to get todo: %w", err)
	}

	return todo, nil
}

// Create creates a new todo item.
func (r *TodoRepository) Create(req models.TodoRequest) (*models.Todo, error) {
	todo := &models.Todo{}
	query := `
        INSERT INTO todos (title, description, completed) 
        VALUES ($1, $2, $3) 
        RETURNING id, title, description, completed, created_at, updated_at`

	err := r.db.Get(todo, query, req.Title, req.Description, req.Completed)
	if err != nil {
		return nil, fmt.Errorf("failed to create todo: %w", err)
	}

	return todo, nil
}

// Update updates an existing todo item.
func (r *TodoRepository) Update(id int, req models.TodoRequest) (*models.Todo, error) {
	todo := &models.Todo{}
	query := `
        UPDATE todos 
        SET title = $1, description = $2, completed = $3, updated_at = NOW() 
        WHERE id = $4 
        RETURNING id, title, description, completed, created_at, updated_at`

	err := r.db.Get(todo, query, req.Title, req.Description, req.Completed, id)
	if err != nil {
		if err == sql.ErrNoRows {
			return nil, fmt.Errorf("todo with id %d not found", id)
		}
		return nil, fmt.Errorf("failed to update todo: %w", err)
	}

	return todo, nil
}

// Delete removes a todo item by its ID.
func (r *TodoRepository) Delete(id int) error {
	query := "DELETE FROM todos WHERE id = $1"
	result, err := r.db.Exec(query, id)
	if err != nil {
		return fmt.Errorf("failed to delete todo: %w", err)
	}

	rowsAffected, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("failed to get affected rows: %w", err)
	}

	if rowsAffected == 0 {
		return fmt.Errorf("todo with id %d not found", id)
	}

	return nil
}

The RETURNING clause in PostgreSQL allows retrieving updated data in the same query, eliminating the need for separate SELECT statements. This pattern is particularly powerful for maintaining data consistency and reducing database round trips.

Notice how sql.ErrNoRows gets special handling to distinguish between “not found” errors and actual database failures. This distinction helps API consumers understand whether they should retry the request or update their client state.


Understanding Database Transactions

Transactions ensure data consistency when multiple database operations must succeed or fail together. With our interface-based design, we need to extend our interfaces to support transactions while maintaining testability.

First, let’s extend our database interface to support transactions. Add the Tx interface to internal/database/interfaces.go:


// Tx defines transaction operations
type Tx interface {
	Select(dest interface{}, query string, args ...interface{}) error
	Get(dest interface{}, query string, args ...interface{}) error
	Exec(query string, args ...interface{}) (sql.Result, error)
	SelectContext(ctx context.Context, dest interface{}, query string, args ...interface{}) error
	GetContext(ctx context.Context, dest interface{}, query string, args ...interface{}) error
	ExecContext(ctx context.Context, query string, args ...interface{}) (sql.Result, error)
	Commit() error
	Rollback() error
}

Now create the transaction implementation in internal/database/transaction.go:

package database

import (
	"fmt"

	"github.com/jmoiron/sqlx"
)

// TxWrapper wraps sqlx.Tx to implement our Tx interface
type TxWrapper struct {
	*sqlx.Tx
}

// Beginx starts a new transaction
func (db *DB) Beginx() (Tx, error) {
	tx, err := db.DB.Beginx()
	if err != nil {
		return nil, fmt.Errorf("failed to begin transaction: %w", err)
	}
	return &TxWrapper{Tx: tx}, nil
}

// Commit commits the transaction
func (tx *TxWrapper) Commit() error {
	if err := tx.Tx.Commit(); err != nil {
		return fmt.Errorf("failed to commit transaction: %w", err)
	}
	return nil
}

// Rollback rolls back the transaction
func (tx *TxWrapper) Rollback() error {
	if err := tx.Tx.Rollback(); err != nil {
		return fmt.Errorf("failed to rollback transaction: %w", err)
	}
	return nil
}

This transaction pattern provides several benefits:

  • Atomicity: Either all operations succeed or all are rolled back
  • Consistency: Database constraints are enforced across the entire transaction
  • Isolation: Other concurrent operations don’t see partial results
  • Interface Compatibility: Transactions implement the same TodoDB interface, so existing repository methods work unchanged
  • Testability: Mock implementations can simulate transaction behavior

The defer pattern ensures that failed transactions are always rolled back, even if an unexpected error occurs. This prevents partial updates that could leave your database in an inconsistent state.

For complex business operations that span multiple repository methods, transactions ensure that either the entire operation completes successfully or the database remains unchanged.
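Here is what that defer pattern looks like in practice, using a hypothetical bulk operation written against our Tx interface (illustrative only; this method is not part of the tutorial's repository):

// MarkAllCompleted flips every open todo inside a single transaction.
func (r *TodoRepository) MarkAllCompleted() error {
	tx, err := r.db.Beginx()
	if err != nil {
		return err
	}
	// Rollback after a successful Commit returns sql.ErrTxDone and is
	// effectively a no-op, so deferring it is safe and guarantees cleanup
	// on any early return or panic.
	defer tx.Rollback()

	if _, err := tx.Exec("UPDATE todos SET completed = TRUE WHERE completed = FALSE"); err != nil {
		return fmt.Errorf("failed to complete todos: %w", err)
	}
	return tx.Commit()
}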


Updating Handlers to Use the Database

Now we need to update our handlers to use the database repository instead of in-memory storage. Update internal/handlers/todos.go:

package handlers

import (
	"net/http"
	"strconv"
	"strings"

	"todo-api/internal/models"
	"todo-api/internal/repository"

	"github.com/gin-gonic/gin"
)

type TodoHandler struct {
	repo *repository.TodoRepository
}

func NewTodoHandler(repo *repository.TodoRepository) *TodoHandler {
	return &TodoHandler{
		repo: repo,
	}
}

func (h *TodoHandler) GetTodos(c *gin.Context) {
	var filter models.TodoFilter
	if err := c.ShouldBindQuery(&filter); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{
			"error":   "Invalid query parameters",
			"details": err.Error(),
		})
		return
	}

	todos, err := h.repo.GetAll(filter)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{
			"error": "Failed to retrieve todos",
		})
		return
	}

	c.JSON(http.StatusOK, gin.H{
		"todos":  todos,
		"count":  len(todos),
		"filter": filter,
	})
}

The ShouldBindQuery method automatically parses URL query parameters into our filter struct. This eliminates manual parameter parsing and provides automatic validation based on struct tags. Including the filter in the response helps API consumers understand how their request was interpreted.
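For example, a request like GET /api/v1/todos?completed=true&search=go&limit=5 binds to a filter equivalent to the literal below; the default tags fill in anything the query string omits:

completedTrue := true
filter := models.TodoFilter{
	Completed: &completedTrue, // "completed=true" parsed into the *bool field
	Search:    "go",
	Limit:     5, // overrides the default of 10 from the form tag
	Offset:    0, // absent from the URL, so the default of 0 applies
}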

Error handling becomes more nuanced with database operations. We distinguish between client errors (bad request parameters) and server errors (database failures) by returning appropriate HTTP status codes. In production, you’d log database errors for debugging while returning generic error messages to clients for security.

Update the remaining handler methods:


func (h *TodoHandler) GetTodo(c *gin.Context) {
	idParam := c.Param("id")
	id, err := strconv.Atoi(idParam)
	if err != nil {
		c.JSON(http.StatusBadRequest, gin.H{
			"error": "Invalid todo ID format",
		})
		return
	}

	todo, err := h.repo.GetByID(id)
	if err != nil {
		if strings.Contains(err.Error(), "not found") {
			c.JSON(http.StatusNotFound, gin.H{
				"error": "Todo not found",
			})
			return
		}
		c.JSON(http.StatusInternalServerError, gin.H{
			"error": "Failed to retrieve todo",
		})
		return
	}

	c.JSON(http.StatusOK, gin.H{
		"todo": todo,
	})
}

func (h *TodoHandler) CreateTodo(c *gin.Context) {
	var req models.TodoRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{
			"error":   "Invalid request data",
			"details": err.Error(),
		})
		return
	}

	todo, err := h.repo.Create(req)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{
			"error": "Failed to create todo",
		})
		return
	}

	c.JSON(http.StatusCreated, gin.H{
		"todo":    todo,
		"message": "Todo created successfully",
	})
}

func (h *TodoHandler) UpdateTodo(c *gin.Context) {
	idParam := c.Param("id")
	id, err := strconv.Atoi(idParam)
	if err != nil {
		c.JSON(http.StatusBadRequest, gin.H{
			"error": "Invalid todo ID format",
		})
		return
	}

	var req models.TodoRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{
			"error":   "Invalid request data",
			"details": err.Error(),
		})
		return
	}

	todo, err := h.repo.Update(id, req)
	if err != nil {
		if strings.Contains(err.Error(), "not found") {
			c.JSON(http.StatusNotFound, gin.H{
				"error": "Todo not found",
			})
			return
		}
		c.JSON(http.StatusInternalServerError, gin.H{
			"error": "Failed to update todo",
		})
		return
	}

	c.JSON(http.StatusOK, gin.H{
		"todo":    todo,
		"message": "Todo updated successfully",
	})
}

func (h *TodoHandler) DeleteTodo(c *gin.Context) {
	idParam := c.Param("id")
	id, err := strconv.Atoi(idParam)
	if err != nil {
		c.JSON(http.StatusBadRequest, gin.H{
			"error": "Invalid todo ID format",
		})
		return
	}

	err = h.repo.Delete(id)
	if err != nil {
		if strings.Contains(err.Error(), "not found") {
			c.JSON(http.StatusNotFound, gin.H{
				"error": "Todo not found",
			})
			return
		}
		c.JSON(http.StatusInternalServerError, gin.H{
			"error": "Failed to delete todo",
		})
		return
	}

	c.JSON(http.StatusOK, gin.H{
		"message": "Todo deleted successfully",
	})
}

func (h *TodoHandler) GetStats(c *gin.Context) {
	// For now, return basic stats - this can be enhanced later
	c.JSON(http.StatusOK, gin.H{
		"message": "Stats endpoint - implementation pending",
		"status":  "success",
	})
}

These handlers now include proper error classification and meaningful error messages. The pattern of checking error strings isn’t ideal for production code, but it works for our current implementation. In Part 5, we’ll implement custom error types for more sophisticated error handling.
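As a preview of that improvement, a sentinel error plus errors.Is removes the string matching entirely (the names here are hypothetical; we'll build the real version in Part 5):

// In the repository package:
var ErrNotFound = errors.New("todo not found")

// GetByID would then return fmt.Errorf("todo %d: %w", id, ErrNotFound)
// when it sees sql.ErrNoRows.

// In the handler:
if errors.Is(err, repository.ErrNotFound) {
	c.JSON(http.StatusNotFound, gin.H{"error": "Todo not found"})
	return
}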


Environment Configuration

Production deployments require proper environment configuration. Create a .env.example file to document required environment variables:

# Database Configuration
DB_HOST=localhost
DB_PORT=5432
DB_USER=todouser
DB_PASSWORD=todopass
DB_NAME=todoapi
DB_SSLMODE=disable

# Connection Pool Settings
DB_MAX_OPEN_CONNS=25
DB_MAX_IDLE_CONNS=5
DB_CONN_MAX_LIFETIME=5m
DB_CONN_MAX_IDLE_TIME=5m

# Server Configuration
PORT=8080
GIN_MODE=release

# Logging
LOG_LEVEL=info
LOG_FORMAT=json

Create a .env file for local development (don’t commit this to version control):

cp .env.example .env

Update your application to load environment variables. Install the godotenv package:

go get github.com/joho/godotenv

Update cmd/server/main.go to load environment configuration and wire everything together:

package main

import (
    "log"
    "os"

    "todo-api/internal/database"
    "todo-api/internal/handlers"
    "todo-api/internal/repository"

    "github.com/gin-gonic/gin"
    "github.com/joho/godotenv"
)

func main() {
    // Load environment variables
    if err := godotenv.Load(); err != nil {
        log.Println("No .env file found, using system environment variables")
    }

    // Set Gin mode from environment
    if ginMode := os.Getenv("GIN_MODE"); ginMode != "" {
        gin.SetMode(ginMode)
    }

    // Load database configuration
    dbConfig := database.LoadConfig()
    
    // Connect to database
    db, err := database.NewConnection(dbConfig)
    if err != nil {
        log.Fatal("Failed to connect to database:", err)
    }
    defer db.Close()

    // Initialize repository and handler
    todoRepo := repository.NewTodoRepository(db)
    todoHandler := handlers.NewTodoHandler(todoRepo)

    // Setup router
    router := gin.Default()

    // Health check with database connectivity
    router.GET("/health", func(c *gin.Context) {
        if err := db.HealthCheck(); err != nil {
            c.JSON(500, gin.H{"status": "unhealthy", "error": err.Error()})
            return
        }

        stats := db.GetStats()
        c.JSON(200, gin.H{
            "status": "healthy",
            "database": gin.H{
                "connected": true,
                "open_connections": stats.OpenConnections,
                "in_use": stats.InUse,
                "idle": stats.Idle,
            },
        })
    })

    // API routes
    v1 := router.Group("/api/v1")
    {
        todos := v1.Group("/todos")
        {
            todos.GET("", todoHandler.GetTodos)
            todos.POST("", todoHandler.CreateTodo)
            todos.GET("/:id", todoHandler.GetTodo)
            todos.PUT("/:id", todoHandler.UpdateTodo)
            todos.DELETE("/:id", todoHandler.DeleteTodo)
            todos.GET("/stats", todoHandler.GetStats)
        }
    }

    // Start server
    port := os.Getenv("PORT")
    if port == "" {
        port = "8080"
    }

    log.Printf("Starting Todo API server on :%s", port)
    log.Printf("Database: %s@%s:%d/%s", dbConfig.User, dbConfig.Host, dbConfig.Port, dbConfig.DBName)
    log.Println("API endpoints available at http://localhost:" + port + "/api/v1/todos")
    
    if err := router.Run(":" + port); err != nil {
        log.Fatal("Failed to start server:", err)
    }
}
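One production nicety the main function above skips is graceful shutdown: router.Run exits immediately on SIGINT and can drop in-flight requests. Below is a sketch of the standard http.Server pattern you could swap in for the router.Run call (it additionally needs the context, net/http, os/signal, syscall, and time imports):

srv := &http.Server{Addr: ":" + port, Handler: router}

// Serve in a goroutine so main can block waiting for a shutdown signal.
go func() {
    if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
        log.Fatal("Failed to start server:", err)
    }
}()

// Block until SIGINT or SIGTERM, then give in-flight requests 10s to finish.
quit := make(chan os.Signal, 1)
signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
<-quit

ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
if err := srv.Shutdown(ctx); err != nil {
    log.Fatal("Forced shutdown:", err)
}
log.Println("Server stopped cleanly")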

Testing Your Database-Backed API

With everything connected, let’s test the complete system. Start your containers and application:

# Start PostgreSQL
docker-compose up -d postgres

# Wait for database to be ready
docker-compose logs -f postgres

# Start the API server
go run cmd/server/main.go

Test the enhanced filtering capabilities:

Filter by completion status:

curl "http://localhost:8080/api/v1/todos?completed=false"

Search todos:

curl "http://localhost:8080/api/v1/todos?search=learn"

Pagination:

curl "http://localhost:8080/api/v1/todos?limit=2&offset=0"

Combined filters:

curl "http://localhost:8080/api/v1/todos?completed=false&search=go&limit=5"

Your API now persists data across restarts and supports sophisticated querying patterns essential for production applications.

Integration Testing with Test Database

Professional APIs require integration tests that verify database operations work correctly. Create a test database configuration by adding to your docker-compose.yml:

  postgres_test:
    image: postgres:15-alpine
    container_name: todo_postgres_test
    environment:
      POSTGRES_DB: todoapi_test
      POSTGRES_USER: todouser
      POSTGRES_PASSWORD: todopass
    ports:
      - "5433:5432"
    volumes:
      - ./sql:/docker-entrypoint-initdb.d
    tmpfs:
      - /var/lib/postgresql/data
    networks:
      - todo_network

Create integration tests in internal/repository/todo_repository_test.go:

package repository

import (
	"os"
	"testing"
	"time"

	"todo-api/internal/database"
	"todo-api/internal/models"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

var testDB *database.DB

func TestMain(m *testing.M) {
	// Setup test database
	config := database.Config{
		Host:            "localhost",
		Port:            5433,
		User:            "todouser",
		Password:        "todopass",
		DBName:          "todoapi_test",
		SSLMode:         "disable",
		MaxOpenConns:    5,
		MaxIdleConns:    2,
		ConnMaxLifetime: time.Minute,
		ConnMaxIdleTime: time.Minute,
	}

	var err error
	testDB, err = database.NewConnection(config)
	if err != nil {
		panic("Failed to connect to test database: " + err.Error())
	}

	// Run tests
	code := m.Run()

	// Cleanup
	testDB.Close()
	os.Exit(code)
}

func setupTestData(t *testing.T) *TodoRepository {
	// Clean the database
	_, err := testDB.Exec("TRUNCATE todos RESTART IDENTITY CASCADE")
	require.NoError(t, err)

	return NewTodoRepository(testDB)
}

func TestTodoRepository_Create(t *testing.T) {
	repo := setupTestData(t)

	req := models.TodoRequest{
		Title:       "Test Todo",
		Description: "Test Description",
		Completed:   false,
	}

	todo, err := repo.Create(req)

	assert.NoError(t, err)
	assert.NotZero(t, todo.ID)
	assert.Equal(t, req.Title, todo.Title)
	assert.Equal(t, req.Description, todo.Description)
	assert.Equal(t, req.Completed, todo.Completed)
	assert.NotZero(t, todo.CreatedAt)
	assert.NotZero(t, todo.UpdatedAt)
}

func TestTodoRepository_GetByID(t *testing.T) {
	repo := setupTestData(t)

	// Create a todo first
	req := models.TodoRequest{
		Title:       "Test Todo",
		Description: "Test Description",
		Completed:   false,
	}

	created, err := repo.Create(req)
	require.NoError(t, err)

	// Get the todo
	found, err := repo.GetByID(created.ID)

	assert.NoError(t, err)
	assert.Equal(t, created.ID, found.ID)
	assert.Equal(t, created.Title, found.Title)
}

func TestTodoRepository_GetAll_WithFilters(t *testing.T) {
	repo := setupTestData(t)

	// Create test data
	todos := []models.TodoRequest{
		{Title: "Learn Go", Description: "Study basics", Completed: false},
		{Title: "Build API", Description: "REST API", Completed: true},
		{Title: "Learn Python", Description: "Study advanced", Completed: false},
	}

	for _, todo := range todos {
		_, err := repo.Create(todo)
		require.NoError(t, err)
	}

	// Test filtering by completion status
	completed := false
	filter := models.TodoFilter{
		Completed: &completed,
		Limit:     10,
		Offset:    0,
	}

	results, err := repo.GetAll(filter)
	assert.NoError(t, err)
	assert.Len(t, results, 2)

	// Test search functionality
	filter = models.TodoFilter{
		Search: "Learn",
		Limit:  10,
		Offset: 0,
	}

	results, err = repo.GetAll(filter)
	assert.NoError(t, err)
	assert.Len(t, results, 2)
}

func TestTodoRepository_Update(t *testing.T) {
	repo := setupTestData(t)

	// Create a todo
	req := models.TodoRequest{
		Title:       "Original Title",
		Description: "Original Description",
		Completed:   false,
	}

	created, err := repo.Create(req)
	require.NoError(t, err)

	// Update the todo
	updateReq := models.TodoRequest{
		Title:       "Updated Title",
		Description: "Updated Description",
		Completed:   true,
	}

	updated, err := repo.Update(created.ID, updateReq)

	assert.NoError(t, err)
	assert.Equal(t, created.ID, updated.ID)
	assert.Equal(t, updateReq.Title, updated.Title)
	assert.Equal(t, updateReq.Description, updated.Description)
	assert.Equal(t, updateReq.Completed, updated.Completed)
	assert.True(t, updated.UpdatedAt.After(created.UpdatedAt))
}

func TestTodoRepository_Delete(t *testing.T) {
	repo := setupTestData(t)

	// Create a todo
	req := models.TodoRequest{
		Title:       "To Delete",
		Description: "Will be deleted",
		Completed:   false,
	}

	created, err := repo.Create(req)
	require.NoError(t, err)

	// Delete the todo
	err = repo.Delete(created.ID)
	assert.NoError(t, err)

	// Verify it's deleted
	_, err = repo.GetByID(created.ID)
	assert.Error(t, err)
}

Run the integration tests:

# Start test database
docker-compose up -d postgres_test

# Run integration tests
go test ./internal/repository -v

# Run all tests
go test ./... -v

Unit Testing Handlers with Mock Database

While integration tests verify the complete database flow, unit tests allow you to test your handlers in isolation by mocking the database layer. This approach is faster and doesn’t require a running database.

First, create a mock implementation in internal/mocks/todo_db.go:

package mocks

import (
	"context"
	"database/sql"

	"todo-api/internal/database"
)

// MockResult implements sql.Result for testing
type MockResult struct {
	LastInsertIdValue int64
	RowsAffectedValue int64
}

func (m MockResult) LastInsertId() (int64, error) {
	return m.LastInsertIdValue, nil
}

func (m MockResult) RowsAffected() (int64, error) {
	return m.RowsAffectedValue, nil
}

// MockTx implements the Tx interface for testing transactions
type MockTx struct {
	*MockTodoDB
	CommitFunc   func() error
	RollbackFunc func() error
}

func (m *MockTx) Commit() error {
	if m.CommitFunc != nil {
		return m.CommitFunc()
	}
	return nil
}

func (m *MockTx) Rollback() error {
	if m.RollbackFunc != nil {
		return m.RollbackFunc()
	}
	return nil
}

// MockTodoDB is a mock implementation of the TodoDB interface for testing
type MockTodoDB struct {
	SelectFunc        func(dest interface{}, query string, args ...interface{}) error
	GetFunc           func(dest interface{}, query string, args ...interface{}) error
	ExecFunc          func(query string, args ...interface{}) (sql.Result, error)
	SelectContextFunc func(ctx context.Context, dest interface{}, query string, args ...interface{}) error
	GetContextFunc    func(ctx context.Context, dest interface{}, query string, args ...interface{}) error
	ExecContextFunc   func(ctx context.Context, query string, args ...interface{}) (sql.Result, error)
	BeginxFunc        func() (database.Tx, error)
}

func (m *MockTodoDB) Select(dest interface{}, query string, args ...interface{}) error {
	if m.SelectFunc != nil {
		return m.SelectFunc(dest, query, args...)
	}
	return nil
}

func (m *MockTodoDB) Get(dest interface{}, query string, args ...interface{}) error {
	if m.GetFunc != nil {
		return m.GetFunc(dest, query, args...)
	}
	return nil
}

func (m *MockTodoDB) Exec(query string, args ...interface{}) (sql.Result, error) {
	if m.ExecFunc != nil {
		return m.ExecFunc(query, args...)
	}
	return MockResult{LastInsertIdValue: 1, RowsAffectedValue: 1}, nil
}

func (m *MockTodoDB) SelectContext(ctx context.Context, dest interface{}, query string, args ...interface{}) error {
	if m.SelectContextFunc != nil {
		return m.SelectContextFunc(ctx, dest, query, args...)
	}
	return nil
}

func (m *MockTodoDB) GetContext(ctx context.Context, dest interface{}, query string, args ...interface{}) error {
	if m.GetContextFunc != nil {
		return m.GetContextFunc(ctx, dest, query, args...)
	}
	return nil
}

func (m *MockTodoDB) ExecContext(ctx context.Context, query string, args ...interface{}) (sql.Result, error) {
	if m.ExecContextFunc != nil {
		return m.ExecContextFunc(ctx, query, args...)
	}
	return MockResult{LastInsertIdValue: 1, RowsAffectedValue: 1}, nil
}

func (m *MockTodoDB) Beginx() (database.Tx, error) {
	if m.BeginxFunc != nil {
		return m.BeginxFunc()
	}

	// Return a mock transaction by default
	mockTx := &MockTx{
		MockTodoDB: &MockTodoDB{},
	}
	return mockTx, nil
}

Now create unit tests for your handlers in internal/handlers/todo_test.go:

package handlers

import (
	"bytes"
	"database/sql"
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"testing"
	"time"

	"todo-api/internal/mocks"
	"todo-api/internal/models"
	"todo-api/internal/repository"

	"github.com/gin-gonic/gin"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestTodoHandler_GetTodos(t *testing.T) {
	gin.SetMode(gin.TestMode)

	// Create mock database
	mockDB := &mocks.MockTodoDB{}

	// Set up expected behavior
	expectedTodos := []*models.Todo{
		{
			ID:          1,
			Title:       "Test Todo",
			Description: "Test Description",
			Completed:   false,
			CreatedAt:   time.Now(),
			UpdatedAt:   time.Now(),
		},
	}

	// Configure mock to return expected data
	mockDB.SelectFunc = func(dest interface{}, query string, args ...interface{}) error {
		// Type assert to the expected slice type
		if todos, ok := dest.(*[]*models.Todo); ok {
			*todos = expectedTodos
		}
		return nil
	}

	// Create repository with mock database
	repo := repository.NewTodoRepository(mockDB)
	handler := NewTodoHandler(repo)

	// Create test request
	router := gin.New()
	router.GET("/todos", handler.GetTodos)

	req, _ := http.NewRequest("GET", "/todos", nil)
	w := httptest.NewRecorder()

	// Execute request
	router.ServeHTTP(w, req)

	// Assert results
	assert.Equal(t, http.StatusOK, w.Code)

	var response map[string]interface{}
	err := json.Unmarshal(w.Body.Bytes(), &response)
	require.NoError(t, err)

	todos := response["todos"].([]interface{})
	assert.Len(t, todos, 1)
}

func TestTodoHandler_CreateTodo(t *testing.T) {
	gin.SetMode(gin.TestMode)

	// Create mock database
	mockDB := &mocks.MockTodoDB{}

	// Set up expected behavior for create operation
	expectedTodo := &models.Todo{
		ID:          1,
		Title:       "New Todo",
		Description: "New Description",
		Completed:   false,
		CreatedAt:   time.Now(),
		UpdatedAt:   time.Now(),
	}

	// Configure mock to return created todo
	mockDB.GetFunc = func(dest interface{}, query string, args ...interface{}) error {
		if todo, ok := dest.(*models.Todo); ok {
			*todo = *expectedTodo
		}
		return nil
	}

	// Create repository with mock database
	repo := repository.NewTodoRepository(mockDB)
	handler := NewTodoHandler(repo)

	// Create test request
	router := gin.New()
	router.POST("/todos", handler.CreateTodo)

	todoRequest := models.TodoRequest{
		Title:       "New Todo",
		Description: "New Description",
		Completed:   false,
	}

	jsonData, _ := json.Marshal(todoRequest)
	req, _ := http.NewRequest("POST", "/todos", bytes.NewBuffer(jsonData))
	req.Header.Set("Content-Type", "application/json")
	w := httptest.NewRecorder()

	// Execute request
	router.ServeHTTP(w, req)

	// Assert results
	assert.Equal(t, http.StatusCreated, w.Code)

	var response map[string]interface{}
	err := json.Unmarshal(w.Body.Bytes(), &response)
	require.NoError(t, err)

	todo := response["todo"].(map[string]interface{})
	assert.Equal(t, "New Todo", todo["title"])
	assert.Equal(t, "New Description", todo["description"])
	assert.Equal(t, false, todo["completed"])
}

func TestTodoHandler_GetTodo_NotFound(t *testing.T) {
	gin.SetMode(gin.TestMode)

	// Create mock database
	mockDB := &mocks.MockTodoDB{}

	// Configure mock to return "not found" error
	mockDB.GetFunc = func(dest interface{}, query string, args ...interface{}) error {
		return sql.ErrNoRows
	}

	// Create repository with mock database
	repo := repository.NewTodoRepository(mockDB)
	handler := NewTodoHandler(repo)

	// Create test request
	router := gin.New()
	router.GET("/todos/:id", handler.GetTodo)

	req, _ := http.NewRequest("GET", "/todos/999", nil)
	w := httptest.NewRecorder()

	// Execute request
	router.ServeHTTP(w, req)

	// Assert results
	assert.Equal(t, http.StatusNotFound, w.Code)
}

func TestTodoHandler_DeleteTodo(t *testing.T) {
	gin.SetMode(gin.TestMode)

	// Create mock database
	mockDB := &mocks.MockTodoDB{}

	// Configure mock to simulate successful deletion
	mockDB.ExecFunc = func(query string, args ...interface{}) (sql.Result, error) {
		return mocks.MockResult{RowsAffectedValue: 1}, nil
	}

	// Create repository with mock database
	repo := repository.NewTodoRepository(mockDB)
	handler := NewTodoHandler(repo)

	// Create test request
	router := gin.New()
	router.DELETE("/todos/:id", handler.DeleteTodo)

	req, _ := http.NewRequest("DELETE", "/todos/1", nil)
	w := httptest.NewRecorder()

	// Execute request
	router.ServeHTTP(w, req)

	// Assert results
	assert.Equal(t, http.StatusOK, w.Code)

	var response map[string]interface{}
	err := json.Unmarshal(w.Body.Bytes(), &response)
	require.NoError(t, err)

	assert.Equal(t, "Todo deleted successfully", response["message"])
}

These unit tests demonstrate several testing patterns:

  • Mocking Database Responses: Configure the mock to return specific data or errors
  • Type Assertions: Safely cast interface{} parameters to expected types
  • Error Scenarios: Test how handlers respond to database errors
  • HTTP Testing: Use httptest.NewRecorder() to capture HTTP responses
  • JSON Parsing: Verify response structure and content

Benefits of this testing approach:

  • Fast Execution: No database setup or teardown required
  • Isolated Testing: Each test focuses on handler logic, not database operations
  • Comprehensive Coverage: Easy to test error scenarios and edge cases
  • Parallel Execution: Tests can run in parallel without conflicting database state

Run the unit tests:

# Run only handler unit tests
go test ./internal/handlers -v

# Run with coverage
go test ./internal/handlers -v -cover

# Run all tests (unit + integration)
go test ./... -v

The combination of unit tests (fast, isolated) and integration tests (comprehensive, realistic) provides robust test coverage that catches both logic errors and integration issues.

Test the API

With integration and unit tests passing, we can move on to functional testing. After all, we still need to confirm the API actually does what it’s supposed to do.

Make sure your PostgreSQL container is running:

docker ps

If it isn’t, start it:

docker-compose up -d postgres

# Verify it's ready
docker-compose logs postgres

Now start the HTTP server:

go run ./cmd/server/main.go

From another terminal, exercise the API with a variety of requests:

# Get all todos
curl "http://localhost:8080/api/v1/todos"

# Filter by completion status
curl "http://localhost:8080/api/v1/todos?completed=false"

# Search by title/description
curl "http://localhost:8080/api/v1/todos?search=go"

# Pagination
curl "http://localhost:8080/api/v1/todos?limit=2&offset=0"

# Combined filters
curl "http://localhost:8080/api/v1/todos?completed=false&search=api&limit=5"

Create a new todo and verify it persists:

curl -X POST http://localhost:8080/api/v1/todos \
  -H "Content-Type: application/json" \
  -d '{"title":"Database Integration","description":"Successfully integrated PostgreSQL with our API","completed":true}'

Restart your API server and verify the data persists by fetching all todos again. This confirms that your data is properly stored in PostgreSQL rather than memory.


Production Considerations for Database Operations

Database integration introduces several considerations that become critical in production environments:

Connection Pool Tuning: Monitor your connection pool usage and adjust MaxOpenConns based on your database server capacity and application load. Too few connections create bottlenecks, while too many can overwhelm the database.

Query Performance: Use EXPLAIN ANALYZE in PostgreSQL to understand query performance. The indexes we created help with common filtering patterns, but you may need additional indexes as your data grows and query patterns evolve.

Database Migrations: As your schema evolves, you’ll need a migration system to update production databases safely. Consider tools like golang-migrate or goose for managing schema changes across environments.

Backup and Recovery: Implement regular database backups and test your recovery procedures. PostgreSQL provides excellent tools like pg_dump for logical backups and WAL archiving for point-in-time recovery.

Monitoring and Alerting: Monitor database performance metrics like connection count, query duration, and lock waits. Tools like pg_stat_statements provide valuable insights into query performance in production.

Security: Use connection encryption (SSL/TLS) in production, implement proper database user permissions, and consider using connection poolers like PgBouncer for additional security and performance benefits.


Advanced Database Patterns

As your application grows, consider these patterns for handling more complex scenarios:

Read Replicas: Use PostgreSQL read replicas to scale read operations by directing SELECT queries to replica servers while keeping writes on the primary server.

Connection Pooling with PgBouncer: External connection poolers provide more sophisticated connection management and can handle connection pooling across multiple application instances.

Prepared Statements: For frequently executed queries, prepared statements can improve performance by reducing query parsing overhead.
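A minimal sketch with sqlx’s Preparex, assuming the db variable is this tutorial’s *database.DB and that log and the models package are imported:

// Prepare once (for example at startup) and reuse the statement for every
// call; the query is parsed and planned on the server a single time per
// underlying connection.
stmt, err := db.Preparex(
	"SELECT id, title, description, completed, created_at, updated_at FROM todos WHERE completed = $1")
if err != nil {
	log.Fatal(err)
}
defer stmt.Close()

var open []models.Todo
if err := stmt.Select(&open, false); err != nil {
	log.Fatal(err)
}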

Bulk Operations: When processing large datasets, use batch operations with transactions to improve performance and maintain consistency.
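For instance, a batched insert inside one transaction might look like the sketch below; recent sqlx releases expand NamedExec over a slice into a multi-row INSERT, and the function name here is illustrative:

// bulkInsert writes many todos atomically: one round trip for the INSERT,
// and either every row commits or none do.
func bulkInsert(db *database.DB, reqs []models.TodoRequest) error {
	tx, err := db.DB.Beginx() // underlying *sqlx.Tx, which supports NamedExec
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op once Commit succeeds

	if _, err := tx.NamedExec(
		`INSERT INTO todos (title, description, completed)
		 VALUES (:title, :description, :completed)`, reqs); err != nil {
		return err
	}
	return tx.Commit()
}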


Troubleshooting Common Issues

When working with database integration, you might encounter several common issues:

Connection Refused Errors: Verify your PostgreSQL container is running and accessible. Check that the port (5432) isn’t blocked by firewall rules.

Authentication Failures: Ensure your database credentials match those specified in the Docker Compose file. The connection string format is sensitive to special characters in passwords.

Query Performance: If queries become slow, check your indexes and consider using EXPLAIN to understand query execution plans.

Connection Pool Exhaustion: If you see “too many connections” errors, review your connection pool settings and ensure connections are being properly released.

Lock Timeouts: Long-running transactions can cause lock contention. Keep transactions short and consider using appropriate isolation levels for your use case.


What You’ve Accomplished

In Part 2, you’ve successfully transformed your Todo API from in-memory storage to a production-ready system with PostgreSQL persistence. Here’s what you’ve built:

Database Infrastructure:

  • Complete PostgreSQL setup with Docker and Docker Compose
  • Proper database schema with constraints, indexes, and triggers
  • Database migration strategy for schema versioning
  • pgAdmin integration for database management

Production-Ready Configuration:

  • Environment-based configuration management
  • Connection pooling with optimized settings
  • Health checks with database connectivity monitoring
  • Graceful shutdown handling

Repository Pattern Implementation:

  • Clean separation of data access logic with interface-based design
  • Context-aware database operations
  • Advanced filtering and pagination support
  • Comprehensive error handling with meaningful messages
  • Testable architecture using dependency injection

Testing Infrastructure:

  • Integration tests with dedicated test database
  • Unit tests using mock database implementations
  • Repository method testing with real database operations
  • Handler testing with isolated mock dependencies
  • Test data management and cleanup strategies
  • Comprehensive test coverage with both unit and integration tests

Performance Optimization:

  • Database indexes for common query patterns
  • Connection pool optimization
  • Connection pool statistics surfaced through the health endpoint
  • Memory-efficient tmpfs for test database

What’s Next

In Part 2, we’ve transformed our API from in-memory storage to a production-ready PostgreSQL backend: data now persists across restarts, queries support filtering and pagination, and connection pooling keeps resource usage efficient under load. In Part 3, we’ll add the security and operational features that production APIs require:

  • Authentication & Authorization: JWT validation and API key authentication
  • Essential Middleware: Request ID tracking, structured logging, panic recovery
  • Security Features: Rate limiting, CORS, security headers, request size limits
  • Monitoring: Request metrics, error tracking, and health monitoring

The database foundation you’ve built will seamlessly integrate with these advanced features, providing the robust backend that modern applications demand.

The dependency injection pattern (passing the repository into the handler) and the interface-based repository design keep your code testable: you can create mock repositories for unit tests without a real database connection. Connection pooling, meanwhile, ensures efficient resource usage under load, so this database integration will scale as your application grows.

Further Learning

To deepen your understanding of Go database programming and PostgreSQL administration, the official database/sql documentation, the sqlx documentation, and the PostgreSQL manual are good next steps.

The patterns and techniques you’ve learned in this tutorial provide a solid foundation for building scalable, maintainable APIs that handle data persistence professionally. Each concept builds upon established industry practices that you’ll encounter in production systems across different companies and projects.

Aaron Mathis

Systems administrator and software engineer specializing in cloud development, AI/ML, and modern web technologies. Passionate about building scalable solutions and sharing knowledge with the developer community.
