I have a huge file, for example 100 MB, and I need to split it into four 25 MB files using Go.
The problem is that when I use goroutines to read the file, the order of the data inside the output files is not preserved. The code I used is:
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"sync"

	"github.com/google/uuid"
)

func main() {
	file, err := os.Open("sampletest.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer file.Close()

	lines := make(chan string)

	// Start four workers to do the heavy lifting.
	wc1 := startWorker(lines)
	wc2 := startWorker(lines)
	wc3 := startWorker(lines)
	wc4 := startWorker(lines)

	scanner := bufio.NewScanner(file)
	go func() {
		defer close(lines)
		for scanner.Scan() {
			lines <- scanner.Text()
		}
		if err := scanner.Err(); err != nil {
			log.Fatal(err)
		}
	}()

	writefiles(wc1, wc2, wc3, wc4)
}

func writefile(data string) {
	file, err := os.Create("chunks/" + uuid.New().String() + ".txt")
	if err != nil {
		fmt.Println(err)
		return // don't write to a nil file on failure
	}
	defer file.Close()
	file.WriteString(data)
}

func startWorker(lines <-chan string) <-chan string {
	finished := make(chan string)
	go func() {
		defer close(finished)
		for line := range lines {
			finished <- line
		}
	}()
	return finished
}

func writefiles(cs ...<-chan string) {
	var wg sync.WaitGroup
	output := func(c <-chan string) {
		defer wg.Done()
		var d string
		for n := range c {
			d += n
			d += "\n"
		}
		writefile(d)
	}
	wg.Add(len(cs))
	for _, c := range cs {
		go output(c)
	}
	// Wait synchronously; wrapping wg.Wait() in a goroutine would let
	// main return before the chunk files are written.
	wg.Wait()
}
Using this code my file gets split into four roughly equal files, but the order inside them is not preserved. I am very new to Go, so any suggestions are highly appreciated.
I took this code from a site and tweaked it here and there to meet my requirements.