Go’s concurrency model is one of its standout features. Channels usually take the spotlight in goroutine communication, but understanding concurrency without relying on them is essential to mastering Go. This article takes a deep dive into goroutines, the sync package, and practical synchronization patterns.
Concurrency vs. Parallelism
Concurrency is about structuring a program to handle many tasks at once; parallelism is about executing many tasks at the same time, typically on multiple CPU cores. Go is designed for concurrency, which makes it easy to write programs that handle multiple operations independently.
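To see the difference on your own machine, here is a minimal sketch (using only the standard runtime package) that prints how many CPU cores are available and how many OS threads may execute Go code in parallel; goroutines are multiplexed onto those threads.

package main

import (
    "fmt"
    "runtime"
)

func main() {
    // Number of logical CPUs available to this process.
    fmt.Println("CPUs:", runtime.NumCPU())
    // GOMAXPROCS(0) reports the current limit without changing it:
    // the maximum number of OS threads executing Go code simultaneously.
    fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}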
Goroutines: The Building Blocks
A goroutine is a lightweight thread managed by the Go runtime. Creating one is as simple as prefixing a function call with the go keyword.
package main

import (
    "fmt"
    "time"
)

func task(name string) {
    for i := 0; i < 5; i++ {
        fmt.Printf("Task %s is running: %d\n", name, i)
        time.Sleep(time.Millisecond * 500)
    }
}

func main() {
    go task("A")
    go task("B")
    // Crude wait: sleep long enough for the goroutines to finish.
    // A sync.WaitGroup (shown below) is the more reliable approach.
    time.Sleep(time.Second * 3)
    fmt.Println("Main function exiting")
}
Synchronization Without Channels
Using sync.WaitGroup
sync.WaitGroup is a powerful tool for waiting for multiple goroutines to finish their work: call Add before starting each goroutine, Done when it completes, and Wait to block until the counter drops to zero.
package main

import (
    "fmt"
    "sync"
    "time"
)

func worker(id int, wg *sync.WaitGroup) {
    defer wg.Done() // Decrement the counter when the goroutine completes
    fmt.Printf("Worker %d starting\n", id)
    time.Sleep(time.Second)
    fmt.Printf("Worker %d done\n", id)
}

func main() {
    var wg sync.WaitGroup
    for i := 1; i <= 3; i++ {
        wg.Add(1)
        go worker(i, &wg)
    }
    wg.Wait() // Wait for all goroutines to finish
    fmt.Println("All workers completed")
}
Using sync.Mutex
When multiple goroutines access shared data, race conditions can occur. A sync.Mutex ensures that only one goroutine at a time can execute the critical section of code.
package main

import (
    "fmt"
    "sync"
)

type Counter struct {
    value int
    mu    sync.Mutex
}

func (c *Counter) Increment() {
    c.mu.Lock()
    c.value++
    c.mu.Unlock()
}

func main() {
    counter := &Counter{}
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            counter.Increment()
        }()
    }
    wg.Wait()
    fmt.Printf("Final Counter Value: %d\n", counter.value)
}
Practical Patterns
Worker Pool Without Channels
A worker pool is a pattern in which multiple workers process tasks concurrently. Instead of reading from a channel, the workers here pull tasks from a shared slice protected by a mutex.
package main

import (
    "fmt"
    "sync"
)

func worker(id int, tasks *[]int, mu *sync.Mutex, wg *sync.WaitGroup) {
    defer wg.Done()
    for {
        mu.Lock()
        if len(*tasks) == 0 {
            mu.Unlock()
            return
        }
        // Pop the next task from the front of the shared slice.
        task := (*tasks)[0]
        *tasks = (*tasks)[1:]
        mu.Unlock()
        fmt.Printf("Worker %d processing task %d\n", id, task)
    }
}

func main() {
    tasks := []int{1, 2, 3, 4, 5}
    var mu sync.Mutex
    var wg sync.WaitGroup
    for i := 1; i <= 3; i++ {
        wg.Add(1)
        go worker(i, &tasks, &mu, &wg)
    }
    wg.Wait()
    fmt.Println("All tasks processed")
}
Concurrency Pitfalls
- Race conditions: When multiple goroutines access and modify shared data, races can lead to unpredictable behavior. Tools such as sync.Mutex and sync/atomic help prevent them (see the atomic counter sketch below).
- Deadlocks: Occur when goroutines wait indefinitely for resources locked by each other. Careful planning, such as always acquiring locks in a consistent order, is crucial to avoid deadlock (see the lock-ordering sketch after this list).
- Starvation: A goroutine may block indefinitely if other goroutines keep holding the resources it needs.
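To make the sync/atomic suggestion concrete, here is a minimal sketch that rewrites the earlier counter with an atomic increment instead of a mutex; the int64 counter variable is an assumption of this sketch, not part of the original examples.

package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

func main() {
    var counter int64
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            // AddInt64 performs the increment atomically, so no mutex is needed.
            atomic.AddInt64(&counter, 1)
        }()
    }
    wg.Wait()
    fmt.Printf("Final counter value: %d\n", atomic.LoadInt64(&counter))
}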
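As one way to put "careful planning" into practice, the sketch below shows a common deadlock-avoidance technique: always acquire multiple locks in the same global order. The account type, its id field, and the transfer function are hypothetical, invented only for this illustration.

package main

import (
    "fmt"
    "sync"
)

// account is a hypothetical type used only to illustrate lock ordering.
type account struct {
    id      int
    mu      sync.Mutex
    balance int
}

// transfer always locks the account with the smaller id first. Since every
// goroutine acquires the two locks in the same global order, two concurrent
// transfers in opposite directions cannot deadlock.
func transfer(from, to *account, amount int) {
    first, second := from, to
    if to.id < from.id {
        first, second = to, from
    }
    first.mu.Lock()
    defer first.mu.Unlock()
    second.mu.Lock()
    defer second.mu.Unlock()
    from.balance -= amount
    to.balance += amount
}

func main() {
    a := &account{id: 1, balance: 100}
    b := &account{id: 2, balance: 100}
    var wg sync.WaitGroup
    wg.Add(2)
    go func() { defer wg.Done(); transfer(a, b, 10) }()
    go func() { defer wg.Done(); transfer(b, a, 20) }()
    wg.Wait()
    fmt.Println(a.balance, b.balance) // 110 90
}

Because both transfers lock the lower-id account first, neither goroutine can end up holding one lock while waiting for the other.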
Conclusion
Even without channels, concurrency in Go is very powerful. By mastering goroutines, the sync package, and common patterns, you can write high-performance, correct concurrent programs. Try these tools and you’ll soon see why Go is a first choice for concurrent programming!