In my first week of work at Calcium in Wellington, I started migrating the application from .NET 1.1 to .NET 2.0.
The wizard did almost all of the job correctly, and the only thing I needed to change to make it compile without too many warnings was replacing the old (and now obsolete) Parameters.Add with the new Parameters.AddWithValue (I think I did it 1000 times).
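Just to illustrate (the command and parameter names here are made up for the example, not from the real code), the change is basically this:

// .NET 1.1 style, flagged as obsolete in 2.0:
command.Parameters.Add("@Username", username);
// .NET 2.0 replacement:
command.Parameters.AddWithValue("@Username", username);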
Once it compiled we hit a very strange problem: we encrypt the username and password using TripleDES, serialize the result as a string, and send it over the wire with remoting. And of course we decrypt them on the server side.
But when they arrived on the other side, everything was messed up. Hours and hours of Google searching brought me to a page on MSDN: CLR Run-Time Breaking Changes.
The problem that was affecting us was the following:
Short Description: Encoding.GetBytes() may not emit unpaired high or low surrogate characters for certain encodings (e.g. UTF-8 Encoding and UnicodeEncoding).
Description: For Unicode standard compliance, Encoding.GetBytes() will not emit bytes if there is an unpaired or out-of-order surrogate. This is most obvious if the caller calls GetBytes() with a single high or low surrogate. In this case, UTF8Encoding and UnicodeEncoding will not emit anything.
Unicode 4.0 requires that compliant applications not emit unpaired surrogates. In v1.1, GetBytes() would emit bytes for lone surrogates if the encoding supported it (such as UTF8Encoding and UnicodeEncoding). However, that made the CLR non-compliant with the Unicode 4.0 standard.
The change can break an application's assumption that GetBytes() will emit lone high surrogates or mismatched surrogates. BinaryWriter.Write(char ch) is one example of code broken by this.
User Scenario: If the application assumes that GetBytes() will emit a high or low surrogate when it is called with one surrogate (half of the pair) at a time, or that it will emit a surrogate at the end of the character buffer, it may lose the ability to correctly generate surrogate pairs.
To make a long story short, what the above means is that the GetBytes method will only work reliably if it is dealing with a real string, and not with an arbitrary sequence of bytes that has been round-tripped through a string.
That is exactly what we were doing in our encryption code: we encrypted the password (the encryption method returns an array of bytes) and then serialized it to a Unicode string. But since GetBytes no longer emits bytes for the invalid surrogates in such "strange" strings, we were not able to deserialize the string back to the original ciphertext.
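Here is a minimal sketch (not our actual code, just an illustration of the behaviour, assuming using System and System.Text) of how the round trip breaks: arbitrary ciphertext bytes pushed through Encoding.Unicode.GetString can contain unpaired surrogates, and on 2.0 GetBytes no longer emits those as-is.

byte[] original = new byte[] { 0x00, 0xD8, 0x41, 0x00 }; // a lone high surrogate (U+D800) followed by 'A'
string cipher = Encoding.Unicode.GetString(original);
byte[] roundTripped = Encoding.Unicode.GetBytes(cipher);
Console.WriteLine(BitConverter.ToString(original));
Console.WriteLine(BitConverter.ToString(roundTripped));
// On .NET 1.1 the two lines match; on .NET 2.0 the lone surrogate is not
// emitted as-is, so the round-tripped bytes no longer match the original.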
Looking around some more, I found a blog post that gave me the solution to the problem: just serialize and deserialize using a Base64 string. So the quick fix is, instead of serializing with:
byte[] encrypted = memoryStream.ToArray();
return Encoding.Unicode.GetString(encrypted);
and then deserializing with:
byte[] rawData = Encoding.Unicode.GetBytes(cipher);
you have to change it to the following.
You serialize with:
byte[] encrypted = memoryStream.ToArray();
return Convert.ToBase64String(encrypted);
and deserialize with:
byte[] rawData = Convert.FromBase64String(cipher);
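Putting it all together, here is a minimal sketch of the fixed round trip (assuming using System, System.IO, System.Security.Cryptography and System.Text, a 16- or 24-byte TripleDES key and an 8-byte IV supplied by the caller; the method and variable names are mine, not the original code):

string Encrypt(string plainText, byte[] key, byte[] iv)
{
    // Encrypt the plaintext with TripleDES
    TripleDESCryptoServiceProvider des = new TripleDESCryptoServiceProvider();
    MemoryStream memoryStream = new MemoryStream();
    CryptoStream cryptoStream = new CryptoStream(memoryStream, des.CreateEncryptor(key, iv), CryptoStreamMode.Write);
    byte[] plainBytes = Encoding.Unicode.GetBytes(plainText); // fine here: the plaintext is a real string
    cryptoStream.Write(plainBytes, 0, plainBytes.Length);
    cryptoStream.FlushFinalBlock();
    byte[] encrypted = memoryStream.ToArray();
    return Convert.ToBase64String(encrypted); // was: Encoding.Unicode.GetString(encrypted)
}

string Decrypt(string cipher, byte[] key, byte[] iv)
{
    byte[] rawData = Convert.FromBase64String(cipher); // was: Encoding.Unicode.GetBytes(cipher)
    TripleDESCryptoServiceProvider des = new TripleDESCryptoServiceProvider();
    MemoryStream memoryStream = new MemoryStream(rawData);
    CryptoStream cryptoStream = new CryptoStream(memoryStream, des.CreateDecryptor(key, iv), CryptoStreamMode.Read);
    StreamReader reader = new StreamReader(cryptoStream, Encoding.Unicode);
    return reader.ReadToEnd();
}

Note that only the serialization of the ciphertext changes: using the Unicode encoding on the plaintext is still fine, because the plaintext really is a string; it is the raw encrypted bytes that must never go through Encoding.GetString/GetBytes.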
So, if you are still working on 1.1, change your ciphertext serialization format to Base64 before converting to 2.0. And if you are just wondering why your encryption code stopped working on 2.0... well, now you know why.