27

I am working with Amazon S3 uploads and am having trouble with key names being too long. S3 limits the length of a key in bytes, not characters.

From the docs:

The name for a key is a sequence of Unicode characters whose UTF-8 encoding is at most 1024 bytes long.

I also want to embed metadata in the file name, so I need to be able to calculate the current byte length of the string using Python, to make sure the metadata does not push the key over the limit (in which case I would have to fall back to a separate metadata file).

How can I determine the byte length of a UTF-8 encoded string? Again, I am not interested in the character length, but in the actual number of bytes used to store the string.
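
For illustration, the naive character-count check I'd otherwise write is not enough (a quick sketch, not my actual uploader code):

def key_fits_s3(key):
    # Wrong: len() counts characters, but S3's 1024 limit is in bytes.
    return len(key) <= 1024

>>> key_fits_s3(u'\u00e9' * 1024)  # 1024 characters, but 2048 UTF-8 bytes
True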

asked by user319862 (edited by Nakilon)

3 Answers

41
def utf8len(s):
    # Encode to UTF-8 and count the bytes of the result.
    return len(s.encode('utf-8'))

Works in both Python 2 and Python 3 (in Python 2, `s` should be a `unicode` string).
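
For example (the inverted exclamation mark takes two bytes in UTF-8, so the byte count exceeds the character count):

>>> utf8len(u'¡Hola, mundo!')
14
>>> utf8len(u'some/key') <= 1024  # the S3 check from the question
True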

– Dietrich Epp
  • 1
    Thanks. I also found a website that shows you how to do it in several languages here: http://rosettacode.org/wiki/String_length#Byte_Length – user319862 Jul 16 '11 at 02:27
12

Use the string's `encode` method to convert from a character string to a byte string, then use `len()` as usual:

>>> s = u"¡Hola, mundo!"                                                      
>>> len(s)                                                                    
13 # characters                                                                             
>>> len(s.encode('utf-8'))   
14 # bytes
– Mark Reed
7

Encoding the string and using `len()` on the result works great, as other answers have shown, but it does build a throwaway copy of the string. If you're working with very large strings this might not be optimal (though I don't consider 1024 bytes large). The structure of UTF-8 lets you derive each character's byte length from its code point alone, without encoding anything, although it might still be simpler to encode one character at a time. I present both methods here; they should give the same result.

def utf8_char_len_1(c):
    # Derive the byte length from the code point alone:
    # 1 byte through U+007F, 2 through U+07FF, 3 through U+FFFF,
    # 4 through U+10FFFF.
    codepoint = ord(c)
    if codepoint <= 0x7f:
        return 1
    if codepoint <= 0x7ff:
        return 2
    if codepoint <= 0xffff:
        return 3
    if codepoint <= 0x10ffff:
        return 4
    raise ValueError('Invalid Unicode character: ' + hex(codepoint))

def utf8_char_len_2(c):
    # Encode just this one character and count the bytes.
    return len(c.encode('utf-8'))

utf8_char_len = utf8_char_len_1  # pick whichever implementation you prefer

def utf8len(s):
    # Sum the per-character byte lengths across the whole string.
    return sum(utf8_char_len(c) for c in s)
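
A quick sanity check, reusing the string from Mark Reed's answer:

>>> s = u"¡Hola, mundo!"
>>> utf8len(s)
14
>>> utf8len(s) == len(s.encode('utf-8'))
True
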
– Mark Ransom
  • 3
    Note that in exchange for not making a copy this takes about 180x as long as `len(s.encode('utf-8'))`, at least on my python 3.3.2 on a string of 1000 utf8 characters [generated from the code here](http://stackoverflow.com/a/1477572/344821). (It'd be of comparable speed if you wrote the same algorithm in C, presumably.) – Danica Sep 24 '13 at 00:51
  • @Dougal, thanks for running the test. That's useful information, essential for evaluating possible solutions. I had a feeling it might be slower, but didn't know the magnitude. Did you try both versions? – Mark Ransom Sep 24 '13 at 01:20
  • 1
    The version with `utf8_char_len_2` is about 1.5x slower than `utf8_char_len_1`. Of course, we're talking about under a millisecond in every case, so if you're just doing it a few times it doesn't matter at all: 2 µs / 375 µs / 600 µs. That said, copying 1kb of memory is also unlikely to matter either. :) – Danica Sep 24 '13 at 01:47
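
A minimal `timeit` sketch along the lines Danica describes (timings are machine-dependent; assumes `utf8len` from the answer above is defined in the same module):

import timeit

# A ~1000-character test string containing multi-byte characters.
s = u'¡Hola, mundo! ' * 72  # 1008 characters, 1080 UTF-8 bytes

setup = 'from __main__ import s, utf8len'
print(timeit.timeit('len(s.encode("utf-8"))', setup=setup, number=1000))
print(timeit.timeit('utf8len(s)', setup=setup, number=1000))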