7

I have used the code below to split the string, but it takes a lot of time.

using (StreamReader srSegmentData = new StreamReader(fileNamePath))
{
    string strSegmentData = "";
    string line = srSegmentData.ReadToEnd();
    int startPos = 0;

    ArrayList alSegments = new ArrayList();
    while (startPos < line.Length && (line.Length - startPos) >= segmentSize)
    {
        strSegmentData = strSegmentData + line.Substring(startPos, segmentSize) + Environment.NewLine;
        alSegments.Add(line.Substring(startPos, segmentSize) + Environment.NewLine);
        startPos = startPos + segmentSize;
    }
}

Please suggest an alternative way to split the string into smaller chunks of a fixed size.

  • `String.Split` could be one option – Mohit S Dec 23 '15 at 07:03
  • This might help: http://stackoverflow.com/questions/568968/does-any-one-know-of-a-faster-method-to-do-string-split – MusicLovingIndianGirl Dec 23 '15 at 07:04
  • we don't have any specific character to use Split, just have to separate the string based on the size (number of characters) – Shankar Anumula Dec 23 '15 at 07:06
  • 4
    Which bit takes a long time? The `srSegmentData.ReadToEnd();` or the `while` loop? Have you actually measured it? – Enigmativity Dec 23 '15 at 07:29
  • 4
    And why are you using `ArrayList`? It's so 10 years ago. – Enigmativity Dec 23 '15 at 07:30
  • If it's the allocation of a huge string in `ReadToEnd` which is slow, maybe you should be using `Read` with a buffer of size `segmentSize`, looping until the stream is exhausted. – Simon MᶜKenzie Dec 23 '15 at 07:52
  • Can you explain exactly what you're doing with the data afterwards? If, for example, you're going to write this back out to a file, the whole intermediate `ArrayList` can be eliminated. – Simon MᶜKenzie Dec 23 '15 at 07:57
  • you are also reallocating large strings in the `strSegmentData = strSegmentData + line.Substring(startPos, segmentSize) + Environment.NewLine;` line (which also duplicates the `Substring` call) – slawekwin Dec 23 '15 at 07:59

2 Answers

12

First of all you should define what you mean by chunk size. If you mean chunks with a fixed number of code units then your current algorithm may be slow, but it works. If that's not what you intend, and you actually mean chunks with a fixed number of characters, then it's broken. I discussed a similar issue in this Code Review post: Split a string into chunks of the same length, so here I will repeat only the relevant parts.

  • You're partitioning over Char but String is UTF-16 encoded, so you may produce broken strings in at least three cases (see the short demo after this list):

    1. One character is encoded with more than one code unit. The Unicode code point for that character is encoded as two UTF-16 code units (a surrogate pair), and the two code units may end up in different slices (and both resulting strings will be invalid).
    2. One character is composed of more than one code point. You may be dealing with a character made of two separate Unicode code points (for example, some Han characters).
    3. One character has combining characters or modifiers. This is more common than you may think: for example, a Unicode combining character like U+0300 COMBINING GRAVE ACCENT used to build à, or a Unicode modifier such as U+02BC MODIFIER LETTER APOSTROPHE.
  • The definitions of character for a programming language and for a human being are pretty different: for example, in Slovak dž is a single character, however it's made of 2/3 Unicode code points, which in this case are also 2/3 UTF-16 code units, so "dž".Length > 1. More about this and other cultural issues in How can I perform a Unicode aware character by character comparison?.
  • Ligatures exist. Assuming one ligature is one code point (and also assuming it's encoded as one code unit), you will treat it as a single glyph even though it represents two characters. What should you do in this case? In general, the definition of character can be pretty vague because it means different things in different disciplines. You (probably) can't handle everything correctly, so you should set some constraints and document the code's behavior.
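
A quick sketch illustrating cases 1 and 3 (my own demo, not from the original question; output values are noted in the comments):

using System;
using System.Globalization;

class UnicodeSplitDemo
{
    static void Main()
    {
        // Case 3: base letter + U+0300 COMBINING GRAVE ACCENT renders as "à".
        string accented = "a\u0300";
        Console.WriteLine(accented.Length);                               // 2 code units
        Console.WriteLine(new StringInfo(accented).LengthInTextElements); // 1 character

        // Case 1: a code point outside the BMP needs two UTF-16 code units.
        string astral = "\U0001D11E"; // MUSICAL SYMBOL G CLEF
        Console.WriteLine(astral.Length);          // 2 code units
        Console.WriteLine(astral.Substring(0, 1)); // a lone surrogate: broken text
    }
}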

One proposed (and untested) implementation may be this:

// requires: using System.Collections.Generic; using System.Globalization;

public static IEnumerable<string> Split(this string value, int desiredLength)
{
    // Enumerate text elements (graphemes) instead of UTF-16 code units.
    var characters = StringInfo.GetTextElementEnumerator(value);
    while (characters.MoveNext())
        yield return String.Concat(Take(characters, desiredLength));
}

private static IEnumerable<string> Take(TextElementEnumerator enumerator, int count)
{
    for (int i = 0; i < count; ++i)
    {
        yield return (string)enumerator.Current;

        // Advance only for the elements consumed by this chunk; the caller's
        // MoveNext() fetches the first element of the next chunk (advancing on
        // the last iteration too would silently skip one element).
        if (i < count - 1 && !enumerator.MoveNext())
            yield break;
    }
}

It's not optimized for speed (as you can see I tried to keep the code short and clear using enumerations) but, for big files, it still performs better than your implementation (see the notes below for the reason).
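
A hypothetical usage sketch, assuming Split and Take above are declared in a static class that is in scope (e.g. a StringExtensions class):

// "e\u0301" is e + COMBINING ACUTE ACCENT: one text element, two code units.
foreach (var chunk in "ae\u0301iou".Split(3))
    Console.WriteLine(chunk); // "aéi" then "ou": the accent stays with its base letter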

About your code, note that:

  • You're building a huge ArrayList (?!) to hold the result. Note also that this way you resize the ArrayList multiple times (even though, given the input size and the chunk size, its final size is known in advance).
  • strSegmentData is rebuilt multiple times; if you need to accumulate characters you should use a StringBuilder, otherwise each concatenation will allocate a new string and copy the old value (it's slow and it also adds pressure to the Garbage Collector). See the sketch after this list.
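
A minimal sketch of the same loop with a pre-sized List<string> and a StringBuilder (the sample values for line and segmentSize are stand-ins for those in your code):

using System;
using System.Collections.Generic;
using System.Text;

string line = new string('x', 1000); // stand-in for the ReadToEnd() result
int segmentSize = 32;

int chunkCount = line.Length / segmentSize;  // final count is known up front
var segments = new List<string>(chunkCount); // pre-sized: no intermediate resizes
var sb = new StringBuilder(line.Length + chunkCount * Environment.NewLine.Length);

for (int pos = 0; pos + segmentSize <= line.Length; pos += segmentSize)
{
    string chunk = line.Substring(pos, segmentSize); // compute each slice once
    segments.Add(chunk);
    sb.Append(chunk).AppendLine();                   // amortized O(1), no full-string copies
}
string strSegmentData = sb.ToString();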

There are faster implementations (see the linked Code Review post, especially Heslacher's implementation for a much faster version) and, if you do not need to handle Unicode correctly (you're sure you manage only US-ASCII characters), there is also a pretty readable implementation from Jon Skeet (note that, after profiling your code, you may still improve its performance for big files by pre-allocating an output list of the right size). I won't repeat their code here, so please refer to the linked posts.

In your specific case you do not need to read the entire huge file into memory: you can read/parse n characters at a time (don't worry too much about disk access, I/O is buffered). It will slightly degrade performance but it will greatly improve memory usage. Alternatively you can read line by line (taking care to handle chunks that span lines).
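
A rough sketch of that chunked-read idea (my own sketch; it splits by UTF-16 code units, so the Unicode caveats above still apply):

using System.Collections.Generic;
using System.IO;

static IEnumerable<string> ReadChunks(string fileNamePath, int segmentSize)
{
    using (var reader = new StreamReader(fileNamePath)) // buffered I/O
    {
        var buffer = new char[segmentSize];
        int read;
        // ReadBlock returns fewer than segmentSize chars only at end of stream,
        // so at most one chunk is held in memory at a time.
        while ((read = reader.ReadBlock(buffer, 0, buffer.Length)) > 0)
            yield return new string(buffer, 0, read);
    }
}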

Adriano Repetti
0

Below is my analysis of your question and code (read the comments):

using (StreamReader srSegmentData = new StreamReader(fileNamePath))
{
    string strSegmentData = "";
    string line = srSegmentData.ReadToEnd(); // Why are you reading this till the end if it is such a long string?
    int startPos = 0;

    ArrayList alSegments = new ArrayList(); // Better choice would be to use List<string>
    while (startPos < line.Length && (line.Length - startPos) >= segmentSize)
    {
        strSegmentData = strSegmentData + line.Substring(startPos, segmentSize) + Environment.NewLine; // Seems like you are inserting line breaks at a fixed interval in your original string. Is that what you want?
        alSegments.Add(line.Substring(startPos, segmentSize) + Environment.NewLine); // Why recalculate the Substring? And why append the newline if the aim is just to "split"?
        startPos = startPos + segmentSize;
    }
}

Making all kinds of assumptions, below is the code I would recommend for splitting a long string. It is just a clean way of doing what you are doing in the sample. You can optimize it further, but I am not sure how fast you need it to be.

static void Main(string[] args) {
    string fileNamePath = "ConsoleApplication1.pdb";
    var segmentSize = 32;

    var op = ReadSplit(fileNamePath, segmentSize);
    var joinedString = string.Join(Environment.NewLine, op);
}

static List<string> ReadSplit(string filePath, int segmentSize) {
    var splitOutput = new List<string>();
    using (var file = new StreamReader(filePath, Encoding.UTF8, true, 8 * 1024)) {
        char[] buffer = new char[segmentSize];
        while (!file.EndOfStream) {
            // ReadBlock fills the buffer; n < segmentSize only on the last chunk.
            int n = file.ReadBlock(buffer, 0, segmentSize);
            splitOutput.Add(new string(buffer, 0, n));
        }
    }

    return splitOutput;
}

I haven't done any performance tests on my version, but my guess is that it is faster than your version.

Also, I am not sure how you plan to consume the output, but a good optimization when doing I/O is to use async calls (see the sketch below). And a good optimization (at the cost of readability and complexity) when handling large strings is to stick with char[].
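
For example, a sketch of an async variant of ReadSplit (same behavior, using ReadBlockAsync; the method name is hypothetical):

using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading.Tasks;

static async Task<List<string>> ReadSplitAsync(string filePath, int segmentSize)
{
    var splitOutput = new List<string>();
    using (var file = new StreamReader(filePath, Encoding.UTF8, true, 8 * 1024))
    {
        var buffer = new char[segmentSize];
        int n;
        // ReadBlockAsync keeps the calling thread free while waiting on I/O.
        while ((n = await file.ReadBlockAsync(buffer, 0, segmentSize)) > 0)
            splitOutput.Add(new string(buffer, 0, n));
    }
    return splitOutput;
}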

Note that

  • You might have to deal with character encoding issues while reading the file
  • If you already have the long string in memory and the file reading was just included in the demo, then you should use the StringReader class instead of the StreamReader class
Vikhram