
Thanks for the code and clarification. I'm surprised the TB team didn't pick it up, but your individual transfer test is a pretty poor representation. All you are testing there is how many batches you can complete per second, giving the client no opportunity to batch transfers internally. This is because when you call CreateTransfers in Go, the call blocks synchronously until that batch completes.

For example, it is as if you created an HTTP server that only allows one concurrent request, or a queue where only one worker ever does work. Is that your workload? I'm not sure I know of many workloads that are completely synchronous with a single worker.
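To see why this matters, here is a self-contained sketch (no TigerBeetle dependency; `simulateCall` is a hypothetical stand-in for a blocking client call with a fixed round-trip cost) comparing one-at-a-time blocking calls against issuing them all concurrently:

  package main

  import (
   "fmt"
   "sync"
   "time"
  )

  // simulateCall stands in for a blocking client call: each call costs
  // one round-trip, regardless of how many transfers it carries.
  func simulateCall() { time.Sleep(time.Millisecond) }

  func main() {
   const n = 200

   // Sequential: one blocking call at a time -- n round-trips total.
   start := time.Now()
   for i := 0; i < n; i++ {
    simulateCall()
   }
   sequential := time.Since(start)

   // Concurrent: all calls in flight at once -- roughly one round-trip.
   start = time.Now()
   var wg sync.WaitGroup
   for i := 0; i < n; i++ {
    wg.Add(1)
    go func() {
     defer wg.Done()
     simulateCall()
    }()
   }
   wg.Wait()
   concurrent := time.Since(start)

   fmt.Printf("sequential: %v, concurrent: %v\n", sequential, concurrent)
   fmt.Println("concurrent faster:", concurrent < sequential)
  }

The sequential loop is the shape of the benchmark being criticized: throughput is capped at one round-trip per call.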

To get a better representation for individual_transfers, I would use a WaitGroup:

  var wg sync.WaitGroup
  var mu sync.Mutex
  completedCount := 0

  for i := 0; i < len(transfers); i++ {
    wg.Add(1)
    go func(index int, transfer Transfer) {
      defer wg.Done()

      // Don't discard the error return: a failed call would otherwise
      // silently count as completed.
      res, err := client.CreateTransfers([]Transfer{transfer})
      if err != nil {
        log.Printf("Error creating transfer %d: %v", index, err)
        return
      }
      for _, e := range res {
        if e.Result != 0 {
          log.Printf("Error creating transfer %d: %v", e.Index, e.Result)
        }
      }

      mu.Lock()
      completedCount++
      if completedCount%100 == 0 {
        fmt.Printf("%d\n", completedCount)
      }
      mu.Unlock()
    }(i, transfers[i])
  }

  wg.Wait()
  fmt.Printf("All %d transfers completed\n", len(transfers))
This will actually allow the client to batch the requests internally and be more representative of the workloads you would get. Note that the above is not the same as doing the batching manually yourself: you could call CreateTransfers concurrently from multiple call sites, and the client would still auto-batch them.
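One caveat with the snippet above is that it spawns one goroutine per transfer, which can be a lot for large workloads. A common variant caps in-flight calls with a buffered channel used as a semaphore. A minimal runnable sketch, where `Transfer` and `submit` are hypothetical stand-ins for the real tigerbeetle-go types and client call:

  package main

  import (
   "fmt"
   "sync"
  )

  // Transfer and submit are stand-ins for the real client types;
  // submit plays the role of client.CreateTransfers for one transfer.
  type Transfer struct{ ID int }

  func submit(t Transfer) error { return nil }

  func main() {
   transfers := make([]Transfer, 1000)
   for i := range transfers {
    transfers[i] = Transfer{ID: i}
   }

   const maxInFlight = 64 // cap concurrency instead of one goroutine per transfer
   sem := make(chan struct{}, maxInFlight)
   var wg sync.WaitGroup

   for _, t := range transfers {
    wg.Add(1)
    sem <- struct{}{} // blocks once maxInFlight calls are pending
    go func(t Transfer) {
     defer wg.Done()
     defer func() { <-sem }()
     if err := submit(t); err != nil {
      fmt.Println("transfer", t.ID, "failed:", err)
     }
    }(t)
   }

   wg.Wait()
   fmt.Printf("all %d transfers completed\n", len(transfers))
  }

With 64 calls in flight the client still gets plenty of concurrent requests to coalesce into batches, without unbounded goroutine growth.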

