Running a multi-threaded producer results in a segmentation fault #9
I tried serializing the for loop above to verify my code, and it runs fine then; it is able to send more than 1,000,000 messages per thread when I am using 4 threads. This looks like a thread-safety issue in the libkafka library to me. Please share your opinion; any help would be appreciated! Thanks,
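For context, "serializing the for loop" above means wrapping the per-message produce call in one global lock so that only one thread is inside libkafka at a time. A minimal sketch of that check; the libkafka calls themselves are left as a placeholder comment since the exact Client/ProduceRequest arguments depend on the local setup:

```cpp
#include <mutex>

// Test-only lock: holding it around the whole produce call means only one
// thread is inside libkafka at a time, which matches the "runs fine when
// serialized" observation and points toward a thread-safety problem.
static std::mutex produce_mutex;

void produce_serialized() {
  std::lock_guard<std::mutex> guard(produce_mutex);
  // Build the ProduceRequest, send it via the libkafka Client, and delete it
  // here (actual libkafka calls omitted; they depend on the local setup).
}
```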
Hi Shikha, how's it going? I'm glad to hear you are finding libkafka useful. I haven't … -DT
Hi David, thanks for writing back. I have a test set ready that you can add to the library for thread safety. For now, I have switched my application to a multi-process design to achieve the goal of the problem statement. -Shikha
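The multi-process workaround mentioned above amounts to giving each producer its own address space so that no libkafka state is shared. A minimal sketch of that structure, with the produce loop itself left as a placeholder and the producer count chosen only for illustration:

```cpp
#include <sys/wait.h>
#include <unistd.h>

int main() {
  const int num_producers = 4;  // assumed count; mirrors the threaded setup
  for (int p = 0; p < num_producers; ++p) {
    pid_t pid = fork();
    if (pid == 0) {
      // Child process: run the single-threaded produce loop here
      // (libkafka calls omitted), so nothing is shared between producers.
      _exit(0);
    }
  }
  // Parent waits for all producer processes to finish.
  while (wait(nullptr) > 0) {
  }
  return 0;
}
```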
Hi David, I tried creating a pull request to include the changes for dynamic buffer allocation in the repo, but somehow I am not able to create a new branch to commit my code to. Can you please help? Thanks,
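For what it's worth, on a repository you don't have push access to, the usual route is to fork it, push a branch to the fork, and open the pull request from there. A sketch of that workflow; the fork URL, branch name, and commit message below are placeholders:

```sh
# Work from your own fork, since pushing branches to the upstream repo
# requires write access.
git clone https://github.com/<your-username>/libkafka.git
cd libkafka

# Create a branch for the change and commit the work there.
git checkout -b dynamic-buffer-allocation
git add .
git commit -m "Use dynamic buffer allocation"

# Push the branch to the fork, then open the pull request against the
# upstream repository from the GitHub web UI.
git push origin dynamic-buffer-allocation
```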
Hi,
I have been trying to run a modified version of SimpleProducer.cc (adapted to my requirements). When I run it with 2 or more threads, each sending 100,000 messages or more, it throws a segmentation fault while deleting a request (this happens at different times and looks random). The error does not occur when running a single thread.
The code that throws the error is the request deletion; on taking a backtrace in gdb, I see that the crash is in ~ProduceRequest() in ProduceRequest.cc of the libkafka package.
Since the behavior is so random, I am not sure what is wrong. Has anyone tried this before?
Thanks,
Shikha
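To make the setup above concrete: the pattern is several threads, each building, sending, and deleting its own ProduceRequest in a loop. The sketch below shows only that threading structure; send_one_message() is a hypothetical placeholder for the libkafka Client/ProduceRequest calls, whose exact arguments depend on the local setup, so this illustrates the scenario rather than reproducing the actual modified SimpleProducer.cc:

```cpp
#include <iostream>
#include <thread>
#include <vector>

// Placeholder for the libkafka work: in the reported setup this is where a
// ProduceRequest is constructed, sent through the Client, and then deleted.
// The crash described above occurs in ~ProduceRequest() during that delete.
static void send_one_message(int thread_id, int msg_index) {
  (void)thread_id;
  (void)msg_index;
}

int main() {
  const int num_threads = 2;              // 2 or more threads trigger the crash
  const int messages_per_thread = 100000; // 100,000+ messages per thread

  std::vector<std::thread> workers;
  for (int t = 0; t < num_threads; ++t) {
    workers.emplace_back([=]() {
      for (int i = 0; i < messages_per_thread; ++i) {
        send_one_message(t, i);  // each thread deletes its own requests
      }
    });
  }
  for (auto& w : workers) {
    w.join();
  }
  std::cout << "all producer threads finished" << std::endl;
  return 0;
}
```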