# Go Concurrent Web Scraper

This repository documents my journey of learning concurrent programming in Go by building a web scraper.

As I build this project, I'm exploring key concepts like goroutines, channels, and rate limiting, all while adhering to Test-Driven Development (TDD) principles.

## 🎯 Project Goals

- Master Go concurrency patterns
- Build a high-performance web scraper
- Implement and understand rate limiting in concurrent systems
- Practice Test-Driven Development
- Document my learning process and discoveries

## 🛠️ Technologies Used

- Go 1.16+
- Standard library (`net/http`, `sync`, etc.)
- Testing framework: built-in `testing` package

## 📘 What I'm Learning

- Effective use of goroutines and channels
- Concurrent design patterns in Go
- Performance optimization techniques
- Error handling in concurrent systems
- TDD practices in Go

## 🗂️ Project Structure

```
.
├── main.go
├── main_test.go
├── go.mod
└── README.md
```

## 🚦 Getting Started

1. Clone this repository
2. Navigate to the project directory
3. Run `go test ./...` to execute the tests
4. Run `go run main.go` to start the scraper

## 📝 Notes and Observations

(I'll be updating this section as I progress through the project, documenting key learnings, challenges, and breakthroughs.)

## 📚 Resources I'm Using

## 🤔 Questions I'm Exploring

- What's the optimal balance between concurrency and rate limiting?
- How can I ensure my concurrent code is both efficient and maintainable?
- What are the best practices for error handling in concurrent Go programs?

Feel free to explore the code and follow along with my learning journey!