| Measure and Compare Entropy | |
|---|---|
| Topic Started: Dec 2 2015, 05:25 PM (1,229 Views) | |
| Karl-Uwe Frank | Dec 2 2015, 05:25 PM Post #1 |
|
|
Uploaded a tiny routine to measure and compare the entropy of values taken from a random source, both as a text string and as a hex string. The longer the hex value gets, the more noticeably the resulting entropy of the text string and of the hex value diverge.

The source code for the entropy check can be downloaded from here http://www.freecx.co.uk/utils/

In order to compile it you need the PureBasic Demo for either Windows, Linux or Mac OS X, which you can download for free at http://www.purebasic.com/download.php

Cheers, Karl-Uwe

P.S.: Just figured out that the map has to be reset before the next test run. Uploaded the fixed source code.

Edited by Karl-Uwe Frank, Dec 2 2015, 07:37 PM.
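[Editor's note] The checkentropy.py script invoked with pypy in the posts below is not reproduced in this thread; the following is only a minimal Python 3 sketch of the kind of Shannon-entropy measurement it performs, based on the Rosetta Code formula cited later in the thread. Function and variable names are illustrative assumptions, not the original source at http://www.freecx.co.uk/utils/.

```python
# Hypothetical sketch of a checkentropy.py-style tool (assumes Python 3).
import math
import sys
from collections import Counter

def shannon_entropy(data):
    """Shannon entropy H = -sum(p * log2(p)) in bits per symbol of `data`."""
    counts = Counter(data)                  # occurrences of each distinct symbol
    length = len(data)
    return -sum((n / length) * math.log2(n / length) for n in counts.values())

if __name__ == "__main__":
    text = sys.argv[1]
    print("Text string to check =", text)
    print("----------------------------------------------------------")
    print("Entropy =", shannon_entropy(text))
```

Run on a string argument, a sketch like this prints a per-character entropy in the same style as the outputs quoted in the posts below.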
|
|
|
| Replies: | |
|---|---|
| Karl-Uwe Frank | Feb 26 2016, 06:41 PM Post #31 |
|
|
For the "lazy birds" another, more drastic example how entropy increases when using the hex values instead a hex value as text string
Edited by Karl-Uwe Frank, Feb 26 2016, 06:47 PM.
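[Editor's note] The example itself is not quoted in this excerpt. The snippet below is only an illustration in the same spirit, reusing the hypothetical shannon_entropy sketch from the first post and the hex value from post #36; it is not the author's original example.

```python
# Illustration only: the same value measured as a hex text string versus
# as decoded raw bytes (assumes the shannon_entropy sketch from above).
import binascii

hex_text = "EC8A89EC8EADE7A9BBE6A782EAB783E6AB83E99F82EAB1BE"
raw_bytes = binascii.unhexlify(hex_text)      # the same value as 24 raw bytes

print(shannon_entropy(hex_text))    # capped at log2(16) = 4 bits per character
print(shannon_entropy(raw_bytes))   # capped at log2(256) = 8 bits per byte
```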
|
|
|
| mok-kong shen | Feb 27 2016, 10:15 AM Post #32 |
|
|
Could you perhaps (roughly) confirm with your software Shannon's result I mentioned (it is to be found in HAC)? |
|
| Karl-Uwe Frank | Feb 27 2016, 12:20 PM Post #33 |
|
|
What do you mean by "(it is to be found in HAC)"? |
|
|
| mok-kong shen | Feb 27 2016, 12:49 PM Post #34 |
|
|
@Karl-Uwe Frank: I meant that, in case you doubt my memory of Shannon's result, you could look it up in HAC and from there also obtain the corresponding work of Shannon, if necessary.
Edited by mok-kong shen, Feb 27 2016, 12:50 PM.
|
|
| Karl-Uwe Frank | Feb 27 2016, 01:05 PM Post #35 |
|
|
@mok-kong shen : I don't doubt your memory, I simply don't know what "HAC" stands for. |
|
|
| Karl-Uwe Frank | Feb 27 2016, 02:46 PM Post #36 |
|
|
It occurred to me that it might be useful to illustrate in binary form why a hex text string can never hold as much entropy as the same hex value.

Let's take this hex value:

EC8A89EC 8EADE7A9 BBE6A782 EAB783E6 AB83E99F 82EAB1BE

Typed as a hex text string over the characters 0..F it reads in binary form

010001010100001100111000010000010011100000111001010001010100001100111000
010001010100000101000100010001010011011101000001001110010100001001000010
010001010011011001000001001101110011100000110010010001010100000101000010
001101110011100000110011010001010011011001000001010000100011100000110011
010001010011100100111001010001100011100000110010010001010100000101000010
001100010100001001000101

and as hex values in the range 0x00..0xFF it reads in binary form

111011001000101010001001111011001000111010101101111001111010100110111011
111001101010011110000010111010101011011110000011111001101010101110000011
111010011001111110000010111010101011000110111110

A much shorter binary value, which could mislead one into assuming that the first might contain more entropy than the latter. Now let's test them both, first the large text string:

pypy checkentropy.py
010001010100001100111000010000010011100000111001010001010100001100111000
010001010100000101000100010001010011011101000001001110010100001001000010
010001010011011001000001001101110011100000110010010001010100000101000010
001101110011100000110011010001010011011001000001010000100011100000110011
010001010011100100111001010001100011100000110010010001010100000101000010
001100010100001001000101

Text string to check =
010001010100001100111000010000010011100000111001010001010100001100111000
010001010100000101000100010001010011011101000001001110010100001001000010
010001010011011001000001001101110011100000110010010001010100000101000010
001101110011100000110011010001010011011001000001010000100011100000110011
010001010011100100111001010001100011100000110010010001010100000101000010
001100010100001001000101
----------------------------------------------------------
Entropy = 0.954434002925

next the far shorter hex value:

pypy checkentropy.py
111011001000101010001001111011001000111010101101111001111010100110111011
111001101010011110000010111010101011011110000011111001101010101110000011
111010011001111110000010111010101011000110111110

Text string to check =
111011001000101010001001111011001000111010101101111001111010100110111011
111001101010011110000010111010101011011110000011111001101010101110000011
111010011001111110000010111010101011000110111110
----------------------------------------------------------
Entropy = 0.988699408288

Are these the results we expected? From my point of view, yes.

Edit: This is simply because the binary expansion of the larger text string is not diverse enough. That follows from the fact that the text string can only consist of the binary values

0 = 00110000
1 = 00110001
2 = 00110010
3 = 00110011
4 = 00110100
5 = 00110101
6 = 00110110
7 = 00110111
8 = 00111000
9 = 00111001
A = 01000001
B = 01000010
C = 01000011
D = 01000100
E = 01000101
F = 01000110

whilst the hex value can consist of the full byte range 00000000 ... 11111111.

Edited by Karl-Uwe Frank, Feb 27 2016, 03:31 PM.
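[Editor's note] A rough way to reproduce this comparison with the shannon_entropy sketch from the first post is shown below; the helper code is an illustrative assumption, and it assumes the hex value is processed without spaces.

```python
# Sketch: expand the hex value into the two bit strings compared above and
# measure their 0/1 entropy (assumes the shannon_entropy sketch from above).
import binascii

hex_value = "EC8A89EC8EADE7A9BBE6A782EAB783E6AB83E99F82EAB1BE"

# Bits of the value typed as an ASCII text string: each character contributes
# the 8 bits of its ASCII code, so only 16 distinct byte patterns can occur.
text_bits = "".join(format(ord(c), "08b") for c in hex_value)

# Bits of the same value as raw bytes: any of the 256 byte values may occur.
byte_bits = "".join(format(b, "08b") for b in binascii.unhexlify(hex_value))

print(len(text_bits), shannon_entropy(text_bits))   # longer, less balanced 0/1 mix
print(len(byte_bits), shannon_entropy(byte_bits))   # shorter, closer to 1 bit per bit
```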
|
|
|
| mok-kong shen | Feb 27 2016, 03:05 PM Post #37 |
|
|
I am currently more interested in whether your software could (approximately) replicate the mentioned result of Shannon, since that could, I surmise, serve as a good indication of the quality of your software.
Edited by mok-kong shen, Feb 27 2016, 03:05 PM.
|
|
| Karl-Uwe Frank | Feb 27 2016, 03:30 PM Post #38 |
|
|
Sorry to ask, but where do I find these results of Shannon you mention?

And please don't get me wrong, the formula for testing the entropy comes from here http://rosettacode.org/wiki/Entropy#Python:_More_succinct_version which I have mentioned in the source code as well.

In general http://rosettacode.org/ is quite a good place if you don't want to reinvent the wheel. You can find dozens of useful implementations of formulae for solving different kinds of problems. |
|
|
| mok-kong shen | Feb 27 2016, 05:31 PM Post #39 |
|
|
Sorry that my memory is not quite exact. HAC doesn't mention a source and has on p.246 only the following: "The estimated average amount of information carried per character (per character entropy) in meaningful alphabetic text is 1.5 bits." On the other hand, I found on p.234 of Schneier's Applied Cryptography: "The rate of normal English takes various values between 1.0 bits/letter and 1.5 bits/letter, for large values of N. Shannon, in [1434], said that the entropy depends on the length of the text. Specifically he indicated a rate of 2.3 bits/letter for 8-letter chunks, but the rate drops to between 1.3 and 1.5 for 16-letter chunks." (Note that the last sentence in the above citation has some relevance to an issue we discussed earlier.) |
|
| Karl-Uwe Frank | Feb 27 2016, 09:51 PM Post #40 |
|
|
To my current understanding, the estimation of entropy based on Shannon's information theory can be described as follows: entropy tells how much information is contained in a given range of values, where the more uncertain or random the occurrence of a character, the more information it carries. This means the greater the diversity, the more entropy can be held.

If we have an alphabet where all characters appear with equal probability, we can calculate the entropy using the formula in the example source code I have published. Basically we first count the occurrences of every character in the given string. Then we walk through the whole possible range of the character set, calculating the proportional appearance p(c) of every found character relative to the string length. The entropy is then accumulated by subtracting, for each character, p(c) * log(p(c)) / log(2) from the running total, i.e. H = -sum over all characters c of p(c) * log2(p(c)). (I hope my description sounds correct.)

If we now calculate the entropy, for example, of the alphabet A..Z we get 4.7004:

pypy checkentropy.py ABCDEFGHIJKLMNOPQRSTUVWXYZ

Text string to check = ABCDEFGHIJKLMNOPQRSTUVWXYZ
----------------------------------------------------------
Entropy = 4.70043971814

The greater the alphabet, the greater the amount of entropy held:

pypy checkentropy.py abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ

Text string to check = abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ
----------------------------------------------------------
Entropy = 5.70043971814

But of course, if we looked at a natural language and measured its entropy, the result would be lower. That is because a natural language has certain letters that appear more often than others. But because we are measuring keywords of some hopefully good pseudo-random quality, such repetition patterns are of no importance and we don't need to take them into account.

Moreover, as mentioned in my initial posting, I would like to shed light on the fact that a given hex value should always be treated as is, and never as a text string, when we use it as a keyword. Under the calculation properties described above, an alphabet consisting only of the letters 0123456789ABCDEF can obviously never hold as much entropy as a hex value drawn from the full byte range 0x00..0xFF.

pypy checkentropy.py 0123456789ABCDEF

Text string to check = 0123456789ABCDEF
----------------------------------------------------------
Entropy = 4.0

pypy checkentropy.py
0x000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F202122
232425262728292A2B2C2D2E2F303132333435363738393A3B3C3D3E3F40414243444546
4748494A4B4C4D4E4F505152535455565758595A5B5C5D5E5F606162636465666768696A
6B6C6D6E6F707172737475767778797A7B7C7D7E7F808182838485868788898A8B8C8D8E
8F909192939495969798999A9B9C9D9E9FA0A1A2A3A4A5A6A7A8A9AAABACADAEAFB0B1B2
B3B4B5B6B7B8B9BABBBCBDBEBFC0C1C2C3C4C5C6C7C8C9CACBCCCDCECFD0D1D2D3D4D5D6
D7D8D9DADBDCDDDEDFE0E1E2E3E4E5E6E7E8E9EAEBECEDEEEFF0F1F2F3F4F5F6F7F8F9FA
FBFCFDFEFF

Hex String to check =
00:01:02:03:04:05:06:07:08:09:0A:0B:0C:0D:0E:0F:10:11:12:13:14:15:16:17:
18:19:1A:1B:1C:1D:1E:1F:20:21:22:23:24:25:26:27:28:29:2A:2B:2C:2D:2E:2F:
30:31:32:33:34:35:36:37:38:39:3A:3B:3C:3D:3E:3F:40:41:42:43:44:45:46:47:
48:49:4A:4B:4C:4D:4E:4F:50:51:52:53:54:55:56:57:58:59:5A:5B:5C:5D:5E:5F:
60:61:62:63:64:65:66:67:68:69:6A:6B:6C:6D:6E:6F:70:71:72:73:74:75:76:77:
78:79:7A:7B:7C:7D:7E:7F:80:81:82:83:84:85:86:87:88:89:8A:8B:8C:8D:8E:8F:
90:91:92:93:94:95:96:97:98:99:9A:9B:9C:9D:9E:9F:A0:A1:A2:A3:A4:A5:A6:A7:
A8:A9:AA:AB:AC:AD:AE:AF:B0:B1:B2:B3:B4:B5:B6:B7:B8:B9:BA:BB:BC:BD:BE:BF:
C0:C1:C2:C3:C4:C5:C6:C7:C8:C9:CA:CB:CC:CD:CE:CF:D0:D1:D2:D3:D4:D5:D6:D7:
D8:D9:DA:DB:DC:DD:DE:DF:E0:E1:E2:E3:E4:E5:E6:E7:E8:E9:EA:EB:EC:ED:EE:EF:
F0:F1:F2:F3:F4:F5:F6:F7:F8:F9:FA:FB:FC:FD:FE:FF
----------------------------------------------------------
Entropy = 8.0

Edit: Just for completeness, the same value passed without the 0x prefix, and therefore treated as a plain text string:

pypy checkentropy.py
000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F20212223
2425262728292A2B2C2D2E2F303132333435363738393A3B3C3D3E3F4041424344454647
48494A4B4C4D4E4F505152535455565758595A5B5C5D5E5F606162636465666768696A6B
6C6D6E6F707172737475767778797A7B7C7D7E7F808182838485868788898A8B8C8D8E8F
909192939495969798999A9B9C9D9E9FA0A1A2A3A4A5A6A7A8A9AAABACADAEAFB0B1B2B3
B4B5B6B7B8B9BABBBCBDBEBFC0C1C2C3C4C5C6C7C8C9CACBCCCDCECFD0D1D2D3D4D5D6D7
D8D9DADBDCDDDEDFE0E1E2E3E4E5E6E7E8E9EAEBECEDEEEFF0F1F2F3F4F5F6F7F8F9FAFB
FCFDFEFF

Text string to check =
000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F20212223
2425262728292A2B2C2D2E2F303132333435363738393A3B3C3D3E3F4041424344454647
48494A4B4C4D4E4F505152535455565758595A5B5C5D5E5F606162636465666768696A6B
6C6D6E6F707172737475767778797A7B7C7D7E7F808182838485868788898A8B8C8D8E8F
909192939495969798999A9B9C9D9E9FA0A1A2A3A4A5A6A7A8A9AAABACADAEAFB0B1B2B3
B4B5B6B7B8B9BABBBCBDBEBFC0C1C2C3C4C5C6C7C8C9CACBCCCDCECFD0D1D2D3D4D5D6D7
D8D9DADBDCDDDEDFE0E1E2E3E4E5E6E7E8E9EAEBECEDEEEFF0F1F2F3F4F5F6F7F8F9FAFB
FCFDFEFF
----------------------------------------------------------
Entropy = 4.0

Edited by Karl-Uwe Frank, Feb 27 2016, 10:03 PM.
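[Editor's note] A quick way to sanity-check these numbers, assuming the shannon_entropy sketch from the first post: a string containing each of its N distinct symbols exactly once has entropy log2(N), which reproduces the 4.70, 5.70, 4.0 and 8.0 values above.

```python
# Each sample holds every symbol of its alphabet exactly once, so the
# measured entropy should equal log2(alphabet size).
import math
import string

samples = [
    string.ascii_uppercase,            # 26 symbols -> log2(26) ~ 4.7004
    string.ascii_letters,              # 52 symbols -> log2(52) ~ 5.7004
    "0123456789ABCDEF",                # 16 symbols -> 4.0
    bytes(range(256)),                 # 256 symbols -> 8.0
]
for s in samples:
    print(len(s), round(math.log2(len(s)), 4), round(shannon_entropy(s), 4))
```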
|
|
|
| mok-kong shen | Feb 27 2016, 10:41 PM Post #41 |
|
|
Computing the entropy of a string consisting of all the characters of an alphabet (i.e. all possible different characters, one each) has IMHO no sensible practical value, since no practical string is like that, and it is the distribution of the diverse characters (not only the frequencies but also the positions of the characters with respect to one another) that influences the entropy of a given string. So, given a string to test for its entropy, how would you proceed? Put it into your software and take the number it outputs? If so, why is that procedure not appropriate for a string of the genre studied by Shannon? Or, put the other way round, why is that procedure appropriate for a string that you test with your software in your own applications? (I don't yet understand your last post.)
Edited by mok-kong shen, Feb 27 2016, 10:45 PM.
|
|
| Karl-Uwe Frank | Feb 28 2016, 01:22 AM Post #42 |
|
|
If we looked at a natural language and measured its entropy, the result would be different, because a natural language has certain letters that appear more often than others. But because we are measuring keywords of some hopefully good pseudo-random quality, such repetition patterns of a natural language are of no importance and we don't need to take them into account. This is mainly because repetition in an unpredictable manner, which does not apply to a natural language, is one of the desired properties of randomness.

Therefore we do not calculate the entropy of a natural language, because a good quality keyword is not built from a natural language; if it were, the keyword would be prone to a dictionary attack. Also, if we use a simple keyword it should never be used as such; instead we use the cryptographic hash of that keyword, and such a hash does not consist of a natural language. Therefore the measurement with the given formula is not designed for calculating the entropy of a natural language, because we do not have such an input. |
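[Editor's note] As a hedged illustration of the hashing step mentioned above (SHA-256 is just one possible choice; the thread does not name a specific hash), the digest spreads the key material over the full byte range, although the measured per-byte entropy of a 32-byte digest is itself capped at log2(32) = 5.

```python
# Illustrative only: hash a passphrase and compare the per-symbol entropy of
# the text keyword with that of the digest bytes (assumes shannon_entropy above).
import hashlib

passphrase = "my secret keyword"                    # hypothetical example input
key_bytes = hashlib.sha256(passphrase.encode("utf-8")).digest()

print(shannon_entropy(passphrase))   # limited by the few distinct letters used
print(shannon_entropy(key_bytes))    # 32 bytes from 0x00..0xFF, at most log2(32) = 5
```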
|
|
| mok-kong shen | Feb 28 2016, 10:03 AM Post #43 |
|
|
So your software requires "good quality" inputs, right? But then the problem is how you "scientifically" measure the "good quality" of a given sequence before letting the sequence be input into your software. In fact, if I already know that a given sequence is sufficiently "good" (in whatever "measure" I choose to use), do I still need to test it at all?
Edited by mok-kong shen, Feb 28 2016, 10:37 AM.
|
|
| Karl-Uwe Frank | Feb 28 2016, 04:42 PM Post #44 |
|
|
There are several different ways of measuring Shannon's entropy, depending on the purpose. For example, if we wanted to measure the Shannon entropy of a natural language, we would need to take the redundancy of the most used characters of that language into account. The formula would then need to be adjusted accordingly in order to honour the language's typical redundancy, and because different languages tend to have different character redundancy, we would probably need to adjust the formula for each of them.

In cryptography, however, there is no room for any kind of natural language, except sometimes in the plaintext. In cryptography we try to make any trace of a natural language vanish, designing our encryption algorithms to generate such pseudo-random output that no trace of or relationship to any natural language remains, so that it can withstand, for example, frequency analysis. Therefore, in terms of cryptography it makes no sense at all to use a formula which checks for the entropy of a natural language.

But the formula in the given program examples can also be very useful in testing the entropy quality of the generated keystream or the final ciphertext. John Walker's program ENT [1] is often used for testing the quality of randomness of a given ciphertext. And here again, he uses the same formula as in my examples, testing not the entropy of a natural language but the entropy of available bits per byte in the range 0x00..0xFF, which we can verify in a quick test:

pypy checkentropy.py
0xd131dd02c5e6eec4693d9a0698aff95c2fcab58712467eab4004583eb8fb7f8955ad34
0609f4b30283e488832571415a085125e8f7cdc99fd91dbdf280373c5bd8823e3156348f
5bae6dacd436c919c6dd53e2b487da03fd02396306d248cda0e99f33420f577ee8ce54b6
7080a80d1ec69821bcb6a8839396f9652b6ff72a70

Hex String to check =
D1:31:DD:02:C5:E6:EE:C4:69:3D:9A:06:98:AF:F9:5C:2F:CA:B5:87:12:46:7E:AB:
40:04:58:3E:B8:FB:7F:89:55:AD:34:06:09:F4:B3:02:83:E4:88:83:25:71:41:5A:
08:51:25:E8:F7:CD:C9:9F:D9:1D:BD:F2:80:37:3C:5B:D8:82:3E:31:56:34:8F:5B:
AE:6D:AC:D4:36:C9:19:C6:DD:53:E2:B4:87:DA:03:FD:02:39:63:06:D2:48:CD:A0:
E9:9F:33:42:0F:57:7E:E8:CE:54:B6:70:80:A8:0D:1E:C6:98:21:BC:B6:A8:83:93:
96:F9:65:2B:6F:F7:2A:70
----------------------------------------------------------
Entropy = 6.57605732417

Storing the bytes above in a binary file and passing it to ENT gives

ENT testfile.bin
Entropy = 6.576057 bits per byte.

As we can see, the formula is well suited to testing the entropy quality of any given amount of hex values, but not of a natural language.

No, the software just tests the quality of a given input. Its main purpose is to visualise the difference between a text-based keyword and a keyword that consists of bytes drawn from the full range of all possible bytes 0x00..0xFF. Such a high-entropy keyword can be achieved if we hash the keyword before passing it to the cipher. The reason for the software IS to test whether there was a "good quality" input.

What I can recommend, however, is integrating the formula into any encryption software in order to check that the generated keyword really has an entropy of at least 5.

And as to whether a sequence already known to be good still needs testing at all: clearly not.

[1] http://www.fourmilab.ch/random/

Edited by Karl-Uwe Frank, Feb 28 2016, 04:44 PM.
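[Editor's note] A possible way to integrate the recommended check; this is only a sketch, the threshold handling and function name are assumptions, and it reuses the shannon_entropy sketch from the first post.

```python
# Reject keys whose measured per-byte entropy is below the threshold
# recommended above (5 bits per byte).
MIN_KEY_ENTROPY = 5.0

def check_key_entropy(key_bytes, minimum=MIN_KEY_ENTROPY):
    """Return the measured entropy, or raise if it is below the threshold."""
    h = shannon_entropy(key_bytes)
    if h < minimum:
        raise ValueError("key entropy %.3f is below the required %.1f bits per byte"
                         % (h, minimum))
    return h

# Example: a 32-byte key derived from a hash or CSPRNG should normally pass.
```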
|
|
|
| mok-kong shen | Feb 28 2016, 06:09 PM Post #45 |
|
|
But a character in any text file occupies a byte, which is two hex digits. You could anyway try and see the result of processing pieces of not-too-short normal texts. |
|