<div dir="ltr"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><p>It seems to me that in this case encoding and length travel
together? E.g. you need to encode anyway, at which point you also
know the byte size?</p>
<p>(I'm a bit unsure there's anything to be gained here by the
method you proposed?)</p>
<p>Do you have use cases where you don't want to decode, you just
want to know the byte length?</p></div></blockquote><div>The main use cases I've seen do want both the encoding and the length.</div><div><br></div><div>I still think there is a benefit to a fast way of getting the length first. The alternative is to accumulate into a temporary buffer and potentially resize it along the way; with gigabytes of data, making another copy is expensive. Knowing the encoded length up front lets you size the output buffer exactly and skip the temporary buffer entirely.</div><div><br></div><div>Some slightly more concrete examples:</div><div><br></div><div>Building a byte[] with all of the content of a lot of data: sizing the byte[] requires knowing the sum of all the lengths you want to put into it first, and only then encoding the strings into it.<br><br>Streaming serialization to the network: the top level has to know the length of the transitive contents it is going to write out in the nested structures. The actual output is streamed; it never constructs a byte[] of the complete data in this scenario.</div><div><br></div><div>(There are also some public protobuf APIs that simply return the encoded byte length of the data, but those are less performance-sensitive use cases.)</div></div></div>
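To make the exact-sizing point concrete, here is a minimal Java sketch. `Utf8Size` and `utf8Length` are hypothetical names, not an existing API; the idea is to count the UTF-8 bytes a String would encode to without allocating anything, then encode directly into an exactly sized buffer (it assumes well-formed input, i.e. no unpaired surrogates, since a real encoder would apply its own replacement policy to those).

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.StandardCharsets;

public final class Utf8Size {
    // Counts the UTF-8 bytes a String would encode to, without allocating.
    // Assumes well-formed input: unpaired surrogates are not handled here.
    public static long utf8Length(CharSequence s) {
        long bytes = 0;
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c < 0x80) {
                bytes += 1;                       // ASCII
            } else if (c < 0x800) {
                bytes += 2;                       // 2-byte sequence
            } else if (Character.isHighSurrogate(c)
                    && i + 1 < s.length()
                    && Character.isLowSurrogate(s.charAt(i + 1))) {
                bytes += 4;                       // supplementary code point
                i++;                              // consume the low surrogate
            } else {
                bytes += 3;                       // rest of the BMP
            }
        }
        return bytes;
    }

    public static void main(String[] args) {
        String s = "naïve ✓ 𝄞";
        // Size the output buffer exactly: no temporary buffer, no resize/copy.
        ByteBuffer out = ByteBuffer.allocate((int) utf8Length(s));
        StandardCharsets.UTF_8.newEncoder()
                .encode(CharBuffer.wrap(s), out, true);
        System.out.println(out.remaining()); // 0: the buffer was filled exactly
    }
}
```

The length pass touches every char twice in total (once to count, once to encode), but that is cheap compared with allocating, growing, and copying a temporary buffer for large inputs.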