
Issues with message > 999 bytes? #4

Open
winbatch opened this issue Dec 19, 2013 · 4 comments

Comments

@winbatch

Using the producer sample, the code barfs if the message I send is 1,000 bytes or larger. I noticed lots of hard-coded 1024 values in the code. Is this the cause?

@winbatch
Author

After I increased the DEFAULT_BUFFER_SIZE fields in Connection.h and Packet.h, the problem was fixed. However, shouldn't this buffer be dynamically allocated?

@DavidTompkins
Member

Greetings,

That buffer is statically allocated as an efficiency mechanism, based on profiling of the library in action. If your use case for Kafka involves variable-sized messages with a wide distribution, then you probably want to implement a malloc pool.

Related to this, an upcoming update to libkafka will allow this buffer size to be passed in as a parameter, so that you can specify it as part of your message creation code.

Best of luck.

-DT
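The parameterized-buffer idea described above could look something like the following minimal sketch. This is not libkafka's actual API: `PacketBuffer` and its members are illustrative names, showing only the general shape of a per-instance capacity replacing a compile-time `DEFAULT_BUFFER_SIZE` constant.

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Illustrative sketch (not libkafka code): a packet buffer whose capacity
// is chosen at construction time instead of a fixed DEFAULT_BUFFER_SIZE.
class PacketBuffer {
public:
  explicit PacketBuffer(std::size_t capacity) : buf_(capacity), used_(0) {}

  // Append raw bytes; returns false if the payload would overflow
  // the requested capacity instead of writing past the end.
  bool append(const void* data, std::size_t len) {
    if (used_ + len > buf_.size()) return false;
    std::memcpy(buf_.data() + used_, data, len);
    used_ += len;
    return true;
  }

  std::size_t capacity() const { return buf_.size(); }
  std::size_t size() const { return used_; }

private:
  std::vector<unsigned char> buf_;  // backing storage, sized per instance
  std::size_t used_;                // bytes written so far
};
```

Callers producing large messages would simply construct the buffer with a capacity derived from their payload size, avoiding the 1024-byte ceiling entirely.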


@winbatch
Author

Yes, that would be better, especially since the messages I might send could theoretically be binary and not null-terminated, so being able to pass the length explicitly would help with that as well.


@ShikhaSrivastava

Hi,

I have been working with libkafka for a while and ran into this issue; I saw you mention that a fix will be in the next release. I implemented a dynamic buffer allocation solution in which the buffer size is passed as an argument to the ProduceRequest API call, and the buffer size calculation takes into account the size of each message in the messageArray.
I tested it with more than 1,000,000 Google Protocol Buffers messages and it works fine.

I'd be happy to share the code with the community so that it can be improved further and can be tested to its extremes.

Thanks,
Shikha
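The sizing calculation described above (summing the size of each message in the messageArray) can be sketched as follows. This is an assumed illustration, not Shikha's actual patch: `PER_MESSAGE_OVERHEAD` is a placeholder for whatever fixed protocol framing (offsets, CRCs, attribute bytes) each message carries, not a real libkafka constant.

```cpp
#include <cstddef>
#include <vector>

// Placeholder for per-message protocol framing bytes; illustrative value,
// not taken from libkafka.
const std::size_t PER_MESSAGE_OVERHEAD = 26;

// Compute the request buffer size from the actual messages rather than
// relying on a fixed DEFAULT_BUFFER_SIZE constant.
std::size_t requiredBufferSize(const std::vector<std::size_t>& messageSizes) {
  std::size_t total = 0;
  for (std::size_t s : messageSizes) {
    total += s + PER_MESSAGE_OVERHEAD;
  }
  return total;
}
```

The ProduceRequest path would then allocate exactly `requiredBufferSize(...)` bytes, so payloads of any size fit without touching a compile-time constant.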

avatli added a commit to avatli/libkafka that referenced this issue Nov 4, 2015