
Micro GAN Pattern Generator

A lightweight GAN that generates abstract patterns on CPU in under 1 hour.

Features

Self-contained: No external downloads needed
CPU-optimized: Runs on any laptop (8GB+ RAM recommended)
Model persistence: Save/load checkpoints
Progress visualization: Images saved every 10 epochs
Modular: Easy to modify and extend
Venv-ready: Isolated environment
Command-line friendly: All parameters configurable

Training time: ~45-60 minutes on modern 4-core CPU
Disk space: ~50MB total (dataset + models + outputs)

Project Structure

micro_gan_patterns/
├── README.md
├── requirements.txt
├── generate_dataset.py
├── train.py
├── generate.py
├── src/
│   ├── __init__.py
│   ├── models.py
│   ├── dataset.py
│   └── utils.py
├── patterns/           # Dataset folder (auto-created)
├── checkpoints/        # Saved models (auto-created)
└── outputs/            # Generated images (auto-created)

Quick Start

1. Setup Environment

# Create and activate virtual environment
python -m venv venv
source venv/bin/activate  # Linux/Mac
# venv\Scripts\activate   # Windows

# Install dependencies
pip install -r requirements.txt

2. Generate Dataset

python generate_dataset.py --num_samples 1000 --image_size 64

3. Train Model

python train.py --epochs 100 --batch_size 16 --save_interval 10

Training takes ~45-60 minutes on CPU

4. Generate Patterns

# Generate 9 patterns from final model
python generate.py --checkpoint_path checkpoints/gan_epoch_100.pth --num_images 9

# Generate 16 patterns with custom filename
python generate.py --checkpoint_path checkpoints/gan_epoch_100.pth --num_images 16 --filename "more_patterns.png"

# Use earlier checkpoint
python generate.py --checkpoint_path checkpoints/gan_epoch_050.pth --num_images 25

Command-Line Options

Generate Dataset

python generate_dataset.py [OPTIONS]
  • --num_samples: Number of images to generate (default: 1000)
  • --save_dir: Output folder (default: "patterns")
  • --image_size: Image size in pixels (default: 64)

Train Model

python train.py [OPTIONS]
  • --data_dir: Dataset directory (default: "patterns")
  • --latent_dim: Latent dimension (default: 64)
  • --image_size: Image size (default: 64)
  • --batch_size: Batch size (default: 16)
  • --epochs: Number of epochs (default: 100)
  • --lr: Learning rate (default: 0.0002)
  • --save_interval: Save every N epochs (default: 10)
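
For orientation, these flags map onto a fairly standard GAN training loop. The sketch below shows how they might wire together; it is not the actual train.py, and the Generator, Discriminator, and dataset names are assumptions based on the project layout. The checkpoint keys and filename pattern follow the ones used elsewhere in this README.

# Minimal sketch of how the train.py flags could feed a GAN training loop.
# Generator/Discriminator are assumed class names from src/models.py; the
# dataset is assumed to yield image tensors only. Adam betas are common
# DCGAN defaults, not values taken from this repository.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train(dataset, generator, discriminator, latent_dim=64, batch_size=16,
          epochs=100, lr=0.0002, save_interval=10):
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    criterion = nn.BCELoss()
    opt_g = torch.optim.Adam(generator.parameters(), lr=lr, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=lr, betas=(0.5, 0.999))

    for epoch in range(1, epochs + 1):
        for real in loader:
            b = real.size(0)
            real_labels = torch.ones(b, 1)
            fake_labels = torch.zeros(b, 1)

            # Discriminator step: real images vs. detached generator samples
            z = torch.randn(b, latent_dim)
            fake = generator(z)
            d_loss = criterion(discriminator(real), real_labels) + \
                     criterion(discriminator(fake.detach()), fake_labels)
            opt_d.zero_grad()
            d_loss.backward()
            opt_d.step()

            # Generator step: try to make the discriminator call fakes real
            g_loss = criterion(discriminator(fake), real_labels)
            opt_g.zero_grad()
            g_loss.backward()
            opt_g.step()

        if epoch % save_interval == 0:
            torch.save({'epoch': epoch,
                        'generator_state_dict': generator.state_dict(),
                        'discriminator_state_dict': discriminator.state_dict()},
                       f"checkpoints/gan_epoch_{epoch:03d}.pth")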

Generate Patterns

python generate.py [OPTIONS]
  • --checkpoint_path: Path to checkpoint file (required)
  • --latent_dim: Latent dimension (default: 64)
  • --num_images: Number of images to generate (default: 9)
  • --output_dir: Output directory (default: "outputs")
  • --filename: Output filename (default: "generated_patterns.png")
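
Internally, generation amounts to loading a checkpoint, sampling noise, and saving a grid of generator outputs. A minimal sketch, assuming the generator ends in a tanh so outputs lie in [-1, 1]; the import path and helper names in the real generate.py may differ.

# Sketch of the generate.py flow: load weights, sample noise, save an image grid.
import os
import torch
from torchvision.utils import save_image
from src.models import Generator  # assumed import path and class name

def generate(checkpoint_path, latent_dim=64, num_images=9,
             output_dir="outputs", filename="generated_patterns.png"):
    generator = Generator(latent_dim=latent_dim)
    checkpoint = torch.load(checkpoint_path, map_location="cpu")
    # If the checkpoint came from a torch.compile'd model, strip the
    # '_orig_mod.' prefix first (see Troubleshooting below).
    generator.load_state_dict(checkpoint["generator_state_dict"])
    generator.eval()

    with torch.no_grad():
        z = torch.randn(num_images, latent_dim)
        images = generator(z)                  # assumed range [-1, 1]

    os.makedirs(output_dir, exist_ok=True)
    nrow = max(int(num_images ** 0.5), 1)      # e.g. 9 images -> 3x3 grid
    save_image(images, os.path.join(output_dir, filename),
               nrow=nrow, normalize=True)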

Customization

Modify Pattern Types

Edit generate_pattern_image() in generate_dataset.py:

pattern_type = random.choice(['circles', 'lines', 'gradient', 'noise'])

Add your own pattern generation logic.
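
For example, a new 'checkerboard' pattern type could be added alongside the existing choices. A hedged sketch using Pillow; the actual generate_pattern_image() signature and existing branches may differ.

# Sketch of adding a 'checkerboard' branch to generate_dataset.py.
# The function name matches the README; its exact signature is an assumption.
import random
from PIL import Image, ImageDraw

def generate_pattern_image(image_size=64):
    pattern_type = random.choice(['circles', 'lines', 'gradient', 'noise', 'checkerboard'])
    img = Image.new('RGB', (image_size, image_size), 'black')
    draw = ImageDraw.Draw(img)

    if pattern_type == 'checkerboard':
        cell = random.choice([4, 8, 16])
        color = tuple(random.randint(64, 255) for _ in range(3))
        for y in range(0, image_size, cell):
            for x in range(0, image_size, cell):
                if (x // cell + y // cell) % 2 == 0:
                    draw.rectangle([x, y, x + cell - 1, y + cell - 1], fill=color)
    # ... existing branches for 'circles', 'lines', 'gradient', 'noise' ...
    return img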

Adjust Model Architecture

Edit src/models.py to change:

  • Layer sizes
  • Number of layers
  • Activation functions

Hyperparameter Tuning

Faster training (lower quality):

python train.py --epochs 50 --batch_size 32

Better quality (slower):

python train.py --epochs 200 --batch_size 8 --lr 0.0001

Reduce memory usage:

python train.py --batch_size 8

Troubleshooting

"RuntimeError: Parent directory does not exist"

Update line 101 in train.py from:

save_checkpoint(..., "checkpoints/final_model.pth")

to:

save_checkpoint(..., "checkpoints")
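
Alternatively, making the checkpoint helper create its target directory avoids this error regardless of how it is called. A sketch only; the real save_checkpoint() in src/utils.py may take different arguments.

# Sketch: ensure the checkpoint directory exists before writing.
# The argument list is an assumption based on load_checkpoint() shown below.
import os
import torch

def save_checkpoint(generator, discriminator, epoch, path):
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    torch.save({
        'epoch': epoch,
        'generator_state_dict': generator.state_dict(),
        'discriminator_state_dict': discriminator.state_dict(),
    }, path)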

"Error loading state_dict: _orig_mod prefix"

Update load_checkpoint() in src/utils.py:

def load_checkpoint(checkpoint_path, generator, discriminator):
    checkpoint = torch.load(checkpoint_path, map_location='cpu')
    
    # Strip '_orig_mod.' prefix from torch.compile
    gen_state_dict = {k.replace('_orig_mod.', ''): v 
                      for k, v in checkpoint['generator_state_dict'].items()}
    disc_state_dict = {k.replace('_orig_mod.', ''): v 
                       for k, v in checkpoint['discriminator_state_dict'].items()}
    
    generator.load_state_dict(gen_state_dict)
    discriminator.load_state_dict(disc_state_dict)
    
    epoch = checkpoint['epoch']
    print(f"✓ Loaded checkpoint from epoch {epoch}")
    return epoch

Out of Memory

  • Reduce --batch_size to 8 or 4
  • Close other applications
  • Use fewer training images

Slow Training

  • Training is CPU-bound and will take time
  • Expected: roughly 30 seconds per epoch on a modern 4-core CPU (100 epochs ≈ 45-60 minutes)
  • Close background applications
  • Ensure CPU isn't throttling (check temperatures)

Model Details

Generator: ~200K parameters

  • Input: 64-dimensional noise vector
  • Output: 64×64 RGB image
  • Architecture: FC → ConvTranspose layers

Discriminator: ~200K parameters

  • Input: 64×64 RGB image
  • Output: Real/fake probability
  • Architecture: Conv layers → FC
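
A compact pair in this spirit might look like the sketch below. It follows the shapes described above (64-dim noise in, 64×64 RGB out; Conv layers down to a single real/fake probability), but the layer widths are illustrative and will not match src/models.py exactly.

# Sketch of a small DCGAN-style generator/discriminator for 64x64 RGB patterns.
# Channel counts are illustrative, not the exact values in src/models.py.
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 4 * 4)              # FC -> 4x4 feature map
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1),  # 4 -> 8
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Tanh(),                                           # image in [-1, 1]
        )

    def forward(self, z):
        return self.deconv(self.fc(z).view(-1, 64, 4, 4))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1),    # 64 -> 32
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(16, 32, 4, stride=2, padding=1),   # 32 -> 16
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # 16 -> 8
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.fc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 8 * 8, 1), nn.Sigmoid())

    def forward(self, x):
        return self.fc(self.conv(x))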

Use Cases

  • Tiled wallpaper backgrounds
  • Game sprite textures
  • Procedural texture generation
  • Abstract art elements
  • UI design backgrounds
  • Pattern research and experimentation

Requirements

See requirements.txt:

  • Python 3.8+
  • PyTorch 2.0+
  • torchvision
  • numpy
  • Pillow
  • matplotlib
  • tqdm
  • psutil
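
A requirements.txt consistent with this list would look roughly like the following; the unpinned versions are an assumption, and the repository's file may pin exact releases.

torch>=2.0
torchvision
numpy
Pillow
matplotlib
tqdm
psutil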

License

MIT License - Feel free to use and modify!

Credits

Built with PyTorch and designed for CPU-only training on laptops.


Happy pattern generation! 🎨
