Issues with message > 999 bytes? #4
Comments
After I increased the DEFAULT_BUFFER_SIZE fields in Connection.h and Packet.h, the problem was fixed. However, shouldn't this buffer be dynamically allocated?
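As a rough illustration of what "dynamically allocated" could look like here, this is a minimal sketch (not libkafka's actual classes; `PacketBuffer` and its methods are hypothetical) of a buffer that grows on demand instead of capping out at a hard-coded size:

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical sketch: replacing a fixed DEFAULT_BUFFER_SIZE array with a
// dynamically sized buffer. Names mirror the discussion, not libkafka's API.
class PacketBuffer {
public:
    explicit PacketBuffer(std::size_t size) : buf_(size), head_(0) {}

    // Append raw bytes, growing the buffer if the payload would overflow it.
    void write(const void* data, std::size_t len) {
        if (head_ + len > buf_.size())
            buf_.resize(head_ + len);  // grow instead of overrunning a 1024-byte cap
        std::memcpy(buf_.data() + head_, data, len);
        head_ += len;
    }

    std::size_t size() const { return head_; }
    std::size_t capacity() const { return buf_.size(); }

private:
    std::vector<unsigned char> buf_;
    std::size_t head_;
};
```

With this approach a 2000-byte message written into a buffer constructed at 1024 bytes simply triggers a resize rather than an overflow.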
Greetings. That buffer is statically allocated as an efficiency mechanism. Related to this, an upcoming update to libkafka will allow this buffer size to be configured. Best of luck. -DT
Yes, that would be better, especially since messages I might send could exceed the current buffer size.
Hi, I have been working with libkafka for a while now, ran into this issue, and saw that you mentioned a fix would be in the next release. I implemented a dynamic buffer allocation solution in which the buffer size is passed as an argument to the ProduceRequest API call, and the buffer-size calculation takes into account the size of each message in the messageArray. I'd be happy to share the code with the community so that it can be improved further and tested to its extremes. Thanks,
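A size calculation of the kind described above could be sketched like this. This is an assumption-laden illustration, not the commenter's actual code: the `Message` struct and both overhead constants are placeholders, not Kafka's exact wire-format sizes.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical message type; libkafka's real Message class differs.
struct Message {
    std::vector<unsigned char> payload;
};

// Sum each message's payload size plus fixed per-message and per-request
// overhead. The overhead values below are illustrative placeholders only.
std::size_t requiredBufferSize(const std::vector<Message>& messageArray) {
    const std::size_t kPerMessageOverhead = 26;  // e.g. offset, length, crc fields
    const std::size_t kRequestOverhead    = 64;  // e.g. request header, topic/partition
    std::size_t total = kRequestOverhead;
    for (const Message& m : messageArray)
        total += m.payload.size() + kPerMessageOverhead;
    return total;
}
```

The result could then be handed to a ProduceRequest-style call as the buffer size, so the buffer scales with the actual contents of the message array instead of a fixed constant.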
See issue adobe-research#4
Using the producer sample, the code barfs if the message I send is 1000 bytes or larger. I noticed in the code lots of hard coded 1024 values. Is this the cause?
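The arithmetic behind the ~1000-byte failure threshold can be shown in a few lines. The 26-byte overhead here is an assumed figure for illustration, not Kafka's exact per-message framing:

```cpp
#include <cstddef>

// Illustrative arithmetic: a payload just under 1024 bytes still overflows
// the hard-coded buffer once per-message protocol overhead is added.
constexpr std::size_t kBufferSize = 1024;  // the hard-coded value in the source
constexpr std::size_t kOverhead   = 26;    // hypothetical framing bytes per message
constexpr std::size_t kPayload    = 1000;  // the size at which the sample starts failing

static_assert(kPayload + kOverhead > kBufferSize,
              "payload plus framing no longer fits the fixed buffer");
```

Under this assumption, any payload whose size plus framing exceeds 1024 bytes overruns the buffer, which would explain failures starting around 1000 bytes.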