Drew Scott Daniels' Blog
Personal, usually technical posts

January 4, 2016

Weak Yet Strong: Rijndael’s security in today’s environment

Filed under: technical — admin @ 7:36 pm

From my undergraduate writings in November, 2000 (quoted word for word). I’ll post my impressions in a later post.

Weak Yet Strong:

Rijndael’s security in today’s environment

The Rijndael algorithm was chosen as the final candidate for the Advanced Encryption Standard by the Advanced Encryption Standard group (AES). Rijndael was one of five finalists; the others were RC6, Serpent, Twofish and MARS. The Rijndael algorithm has many advantages and disadvantages, and it is likely the best choice for today’s security needs.

The Rijndael algorithm uses multiple rounds that operate on arrays of input bytes. Most rounds have four layers; the last round omits the column mixing. The first layer is a simple S-box transformation. The second and third layers are mixing layers in which the rows of the array are shifted and the columns mixed. In the fourth layer, bytes derived from the key are XORed into each byte of the array.
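A minimal Python sketch of a single round on a 4x4 byte state (a 128-bit block), for illustration only: the S-box below is a placeholder identity table rather than the real Rijndael substitution table, and the key schedule that produces the round keys is omitted.

"""Sketch of one Rijndael-style round on a 4x4 byte state (128-bit block)."""

SBOX = list(range(256))  # placeholder: the real Rijndael S-box is a fixed 256-entry table

def sub_bytes(state):
    # Layer 1: substitute every byte through the S-box.
    return [[SBOX[b] for b in row] for row in state]

def shift_rows(state):
    # Layer 2: rotate row r left by r positions.
    return [row[r:] + row[:r] for r, row in enumerate(state)]

def xtime(a):
    # Multiply by 0x02 in GF(2^8) with the Rijndael reduction polynomial.
    a <<= 1
    return (a ^ 0x11B) & 0xFF if a & 0x100 else a

def mix_columns(state):
    # Layer 3: mix each column with a fixed linear transformation over GF(2^8).
    out = [[0] * 4 for _ in range(4)]
    for c in range(4):
        a = [state[r][c] for r in range(4)]
        b = [xtime(x) for x in a]            # 2*a; 3*a is b ^ a
        out[0][c] = b[0] ^ a[1] ^ b[1] ^ a[2] ^ a[3]
        out[1][c] = a[0] ^ b[1] ^ a[2] ^ b[2] ^ a[3]
        out[2][c] = a[0] ^ a[1] ^ b[2] ^ a[3] ^ b[3]
        out[3][c] = a[0] ^ b[0] ^ a[1] ^ a[2] ^ b[3]
    return out

def add_round_key(state, round_key):
    # Layer 4: XOR each byte of the state with the corresponding round-key byte.
    return [[s ^ k for s, k in zip(srow, krow)] for srow, krow in zip(state, round_key)]

def one_round(state, round_key, last=False):
    state = sub_bytes(state)
    state = shift_rows(state)
    if not last:                 # the final round omits the column mixing
        state = mix_columns(state)
    return add_round_key(state, round_key)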

Possible weaknesses

“The best way to hide something is in plain sight.” – unattributed

The AES group spent a large amount of time analysing and discussing the five finalist algorithms. There were several vulnerabilities in reduced versions of Rijndael, as well as some possible minor weaknesses in the complete algorithm. A great deal of data went into the AES conferences, and the security of the chosen algorithm was not the only goal. The speed of the algorithms may have lured the AES group into choosing a less secure algorithm.

Many possible weaknesses in Rijndael may lead to real weaknesses, and several analyses suggest where weaknesses may be found. A weakness in one part of the algorithm does not mean that the whole algorithm is weak, but any weakness can provide a starting point for cryptanalysts. Many parts of the algorithm leave the possibility that the ciphertext will not be adequately enciphered.

The number of rounds can be as low as 10 [1], leaving the possibility of too few permutations of the input bits. Serpent uses 32 rounds [2] to be more thorough. Of the five AES finalists, only Twofish uses anywhere near as few rounds, at 16 [2]. At most, Rijndael uses 14 rounds [2].

The fact that Rijndael derives a larger key from the key that is given [1] is suspicious. Little additional security is gained from making the key larger unless it can be shown that the new key is adequately unique. There could be patterns in the larger keys that make cryptanalysis easier by making it easier to eliminate possible messages. The larger key may make some of the rounds effectively worthless if what the rounds are doing can be known without knowing the key. If the probabilities, without knowing the key, of what the rounds are doing are not adequately even and unique, then a heuristic can be built to attack a message faster than through an exhaustive search. The column mixing algorithm [1] may do the same mixing given different keys. Doing the same thing given different keys is known as a collision; ideally the possibility of collisions would be zero for every part of the algorithm.

It was suggested that attacks on the Rijndael algorithm could be built from the Square attack [2]. The biggest reason for dismissing attacks built from the Square attack was that the 7- and 8-round variants require almost the entire codebook [2]. It was said that even 9 rounds could be vulnerable to a related-key attack [2].

With several old attacks being partially effective on reduced versions of Rijndael, the possibility of collisions in the mixing algorithm, possible collisions in the rounds due to an enlarged key, and few rounds, there could be a huge time savings for cryptanalysts attacking messages encrypted with the Rijndael algorithm. The AES group itself said that Rijndael provides adequate security, while saying that three of the other four finalist algorithms provide high security [2]. The tests done with Rijndael were not equal in all ways to the tests done on the other four candidates, perhaps because it is a unique algorithm or perhaps because results were simply not available at the time.

Possible strengths

The Rijndael algorithm went through many tests and many analyses; the difficulty of finding a heuristic that beats the time of an exhaustive search is very high. Many different things are done in the Rijndael algorithm to make it far less vulnerable to known heuristics. Analyses using many known heuristics showed that the Rijndael algorithm is very secure given today’s knowledge and computing power.

“Rijndael does not have Feistel structure” ([1] section 3 page 8). Many algorithms, including several of the AES finalist algorithms, use the Feistel structure. The Feistel structure has been around for a while and has been widely analysed, and thus it is felt that the structure may be easier to attack than other structures. Rijndael has a uniform structure, but the intermediate states are not simply transposed unchanged [1]. The changing of the states makes Rijndael more resistant to linear and differential cryptanalysis [1].

The fact that the number of rounds is not constant [1] may give the Rijndael algorithm an advantage over other algorithms. If the block and key sizes are not known, then many more decryptions are necessary for an exhaustive search.

Variable shift offsets [1] in the rows add quite a few more possible transformations when combined with column mixing. With only a mixing of the columns, there is still the possibility of leaving cryptanalysts able to do statistical analysis on the columns. When row shifting is combined with column mixing, the cryptanalysts must first figure out the row shift. If the row shifts were constant, it would be much easier to cryptanalyse.

Many tests were done on many architectures with many configurations, and Rijndael was shown to take far longer to decrypt messages than to encrypt them [1]. A longer decryption time is very advantageous, as it makes any search using the algorithm take linearly more time. With a longer decryption time, exhaustive searches become unreasonable a great deal faster.

There were worries that trap doors may be built into the chosen AES algorithm [2]. It was said that Rijndael can easily use a different S-box, relieving some possible fears that there may be back doors in its S-box.

Being a bit-level algorithm that deals at the byte level for many of its operations, Rijndael was shown to be quite quick. Fewer rounds also speed up the algorithm. A high-security algorithm is not all that useful when it “eats up” too many resources.

Rijndael has many advantages that make it a good choice to be the Advanced Encryption Standard. It is fast and has been shown to be reasonably secure. It was anticipated that a new standard would need to be chosen in the distant future, so many of the possible holes in Rijndael were not perceived to be a threat to security. It is anticipated that Rijndael will go through its entire life as an algorithm that is secure against all attacks. Rijndael will provide security in many different applications; most importantly, it is efficient on much of the hardware and software of today.

References:

[1] Daemen and Rijmen, AES Proposal: Rijndael, 1999, 45 pages (Rijndael.pdf)

[2] Nechvatal et al., Report on the Development of the Advanced Encryption Standard, 1999, 116 pages (r2report.pdf)

The above wouldn’t be what I’d write today, but it’s a reference I wanted to preserve here.

My resume is at http://www.boxheap.net/ddaniels/resume.html

November 30, 2015

NEO mining

Filed under: technical,Uncategorized — admin @ 8:49 pm

From my paper notes from years ago:
* Find Near Earth Objects (NEO) catalog(s)
* Determine what is of value [e.g. precious metals]
* Make qualifications list
* Find candidates and rank them by value, by cost (capture and building), and by both
* Pick optimal targets
* Devise methods of capture & construction
* Present plans to backer(s)

I recall thinking that near-Earth asteroids would take less fuel for transport to and from them. I also recall thinking that asteroids have low gravity, which makes it easier to escape from them.

I was holding back on publishing this post, but now seems appropriate given the recent passage of the US “SPACE Act of 2015”. Details are at: https://en.wikipedia.org/wiki/Asteroid_mining#Legal_regimes

Other recent news includes NASA’s Near Earth Asteroid Scout (NEAScout http://www.jpl.nasa.gov/cubesat/missions/neascout.php ) which is going to launch as a secondary payload on the 2018 Space Launch System (SLS) Exploration Mission 1 (EM-1) (story at: http://www.nasaspaceflight.com/2015/11/nasa-identifies-secondary-payloads-sls-em-1/ ). This mission should help figure out what other Near Earth Asteroids are made of.

Drew Daniels
Resume: http://www.boxheap.net/ddaniels/resume.html

November 22, 2015

Minimal backups of a Debian or Ubuntu system

Filed under: Data Compression,technical,Uncategorized — admin @ 11:36 pm

Storage has become so cheap for me that I now use bacula and do full backups based on a broad file set definition. For offsite backups I focus on a smaller set of more important data. At work there are different kinds of requirements and restrictions too.

Back in 2002 I was thinking about minimal backups, and I was dealing with quite a bit more software that did not follow the Filesystem Hierarchy Standard. A post about my thoughts is at https://lists.debian.org/debian-devel/2002/07/msg02232.html with Message-id <Pine.GSO.4.40.0207311107350.16701-100000@castor.cc.umanitoba.ca>. The same kind of methods can be used on Ubuntu too.

Example minimal backup scripts for Debian:
http://www.linux-backup.net/scripts/aaron.sh.txt
http://qref.sourceforge.net/Debian/reference/examples/backup

I wouldn’t ignore files that I installed into special directories and that are not part of packages. The package called cruft, or the Perl program Debarnacle (which can be installed using CPAN), can be used to find files that are not in packages. The above scripts may not be smart enough to back up everything. “cruft” has some problems when used to figure out what files to back up, because it is designed to find files that can be removed. Removing unnecessary files and packages is a good first step before a backup, but don’t delete anything you’re unsure about.
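For a rough idea of what cruft is doing, here is a minimal Python sketch that compares a directory tree against the file lists dpkg keeps under /var/lib/dpkg/info. It is an approximation only: the skip list is a guess, and real tools also handle diversions, alternatives and the like.

#!/usr/bin/env python3
"""Rough sketch: list files under a directory tree that no Debian package claims."""
import glob
import os

# Paths that normally hold unpackaged or volatile data (a guess, adjust as needed).
SKIP = ("/home", "/proc", "/sys", "/dev", "/run", "/tmp", "/var/cache", "/var/log")

def packaged_files():
    owned = set()
    for listfile in glob.glob("/var/lib/dpkg/info/*.list"):
        with open(listfile, errors="replace") as fh:
            owned.update(line.rstrip("\n") for line in fh)
    return owned

def unpackaged(root="/"):
    owned = packaged_files()
    for dirpath, dirnames, filenames in os.walk(root):
        if dirpath.startswith(SKIP):
            dirnames[:] = []   # do not descend into skipped trees
            continue
        for name in filenames:
            path = os.path.join(dirpath, name)
            if path not in owned:
                yield path

if __name__ == "__main__":
    for path in unpackaged("/"):
        print(path)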

I forget all my difficulties in using “cruft” to assist in creating a backup, but I’ll try to recall. One problem was a feature whereby it ignored directories that it knew would have files that weren’t part of packages, like /home. An inconvenience was that I had to sort through all the files that it said should be there but weren’t (perhaps files to delete after a reinstall of packages, although some I should have had). Another problem was that I had to force it to skip some of the other file systems I had mounted, like CDs and my DOS partitions. “cruft” seems to keep a large backup cache of its database; this was sometimes helpful, and sometimes I needed to delete it (maybe to save disk space, as I remember it being big).

Debian backup instructions, and backup information (obsolete and deprecated since 2006):
http://www.debian.org/doc/manuals/system-administrator/ch-sysadmin-backup.html
Replaced with:
http://www.debian.org/doc/manuals/debian-reference/ch10.en.html#_backup_and_recovery

“dpkg --get-selections > list_of_selections.txt” can back up a Debian user’s list of currently selected packages. “dpkg --set-selections < list_of_selections.txt” starts a system restore, iff the packages are still available. Some packages get removed from the Debian archive (http://ftp-master.debian.org/removals.txt has recent examples); if following unstable or testing, you may want to use dpkg-repack or grab a copy of the packages you are worried about. http://archive.debian.org may also be helpful.
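For the backup half, a small Python sketch that saves the selections (and the APT sources, which make the restore easier); the destination directory is made up for illustration. The restore is still “dpkg --set-selections < list_of_selections.txt”, typically followed by “apt-get dselect-upgrade”.

#!/usr/bin/env python3
"""Sketch: save the dpkg package selections and APT sources for a later restore."""
import subprocess
from pathlib import Path

backup_dir = Path("/root/backup-meta")  # hypothetical destination
backup_dir.mkdir(parents=True, exist_ok=True)

# Equivalent of: dpkg --get-selections > list_of_selections.txt
selections = subprocess.run(
    ["dpkg", "--get-selections"], capture_output=True, text=True, check=True
).stdout
(backup_dir / "list_of_selections.txt").write_text(selections)

# Keeping the APT sources next to the selections makes the restore easier.
sources = Path("/etc/apt/sources.list")
if sources.exists():
    (backup_dir / sources.name).write_text(sources.read_text())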

A simple rsync, ssh, cron solution can do regular incremental backups, but might ignore files that are easily available elsewhere (like packages on CD or packages available by a quick download).
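One way to do the incremental part is rsync’s --link-dest option, where unchanged files become hard links to the previous snapshot. A sketch suitable for a cron job; the host, paths and the “latest” convention are all assumptions for illustration, and key-based ssh login is assumed to work.

#!/usr/bin/env python3
"""Sketch: rotating rsync snapshots over ssh, suitable for a cron job."""
import datetime
import subprocess

SOURCE = "/etc/"                      # what to back up
DEST_HOST = "backup.example.org"      # hypothetical backup host
DEST_BASE = "/srv/backups/myhost"     # hypothetical path on that host

today = datetime.date.today().isoformat()
dest = f"{DEST_HOST}:{DEST_BASE}/{today}"
link_dest = f"{DEST_BASE}/latest"     # previous snapshot, hard-linked when unchanged

subprocess.run(
    [
        "rsync", "-a", "--delete",
        f"--link-dest={link_dest}",   # unchanged files become hard links: cheap incrementals
        SOURCE, dest,
    ],
    check=True,
)

# Point "latest" at the new snapshot for the next run.
subprocess.run(
    ["ssh", DEST_HOST, f"ln -snf {DEST_BASE}/{today} {DEST_BASE}/latest"],
    check=True,
)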

Discussion on backing up a Debian system:
http://wayback.archive.org/web/20020821185115/http://www.debianplanet.org/node.php?id=586

My Debian backup steps:
1. I remove files and packages that I’m sure I don’t want or need (deborphan can help me figure this out; I later learned to use aptitude and apt-cache to help find reverse dependencies).

2. I run “cruft” first on all directories, trying to avoid special devices or removing checks on directories where it seems to stall (not the best way, but the best way available right now). It would also be nice to use the md5 signatures for files to see if I manually changed a file in the file system; I’d of course want to back those changed files up (watching for hacks, of course).

3. I remove files indicated by cruft that I’m sure I don’t want or need (possibly also removing them from cruft’s report file or files, to avoid having to repeat the run in step 7).

4. I fix the list of missing files indicated in cruft’s report(s). I also sometimes install packages whose files are present but which, for some reason, aren’t listed as installed (if I do this second part, I’m unlikely to be able to skip step 7).

5. I look for packages that are not available for download or not available from a reliable location (usually all obsolete packages listed in dselect, sometimes more; some obsolete packages have -src packages that they can be built from). I then back up these packages using dpkg-repack, or, if available, I grab a copy of the proper package (dpkg-repack doesn’t recreate packages exactly as they originally were; http://archive.debian.org may be useful for finding packages that are no longer in the archive).

6. I run “dpkg --get-selections > /root/myselections.txt” (this file is important to back up unless you want to go through the list of packages to install again; step 7 should catch it, or you can add it manually if you skip step 7).

7. I re-run “cruft” the same way I ran it for step 2.

8. I go through cruft’s report and remove any information that I don’t want backed up (maybe keeping a separate copy of the missing-files list).

9. I create a list of files to back up, using cruft’s report as a guide (remembering to include myselections.txt and any important packages). /etc and all the conffile information under /var/lib/dpkg/info may be good to have (leaving out unnecessary files would be nice).

The remaining steps do not allow for incremental backups, but they could be modified to do so.

10. I prepend “tar -rf backupfile.tar ” to every line of a text file that lists a file to back up, except the first line, where I use “tar -cf backupfile.tar ” instead (tr may be helpful, but what’s the proper command? I’d prefer to avoid Perl, but is it more common than tr? If so, what’s the proper perl -e line?). I then make the text file executable and execute it. (A Python alternative is sketched after these steps.)

11. I run “bzip2 -9 backupfile.tar” (p7zip might be one of the best choices at the time of this post).

12. I used xcdroast or cdrecord or something else to burn backupfile.tar to a recordable CD. Brasero is more commonly used these days, and burning to a DVD is an option, though USB drives and hard drives are becoming more common. I currently use a portable USB hard drive and network attached storage.
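As mentioned in step 10, one alternative to generating a shell script is to drive the archive from Python’s tarfile module. A minimal sketch, assuming the list of files to back up sits one path per line in a file called files_to_backup.txt (a made-up name):

#!/usr/bin/env python3
"""Sketch: build backupfile.tar from a list of paths, one path per line."""
import tarfile

with open("files_to_backup.txt") as listing, tarfile.open("backupfile.tar", "w") as tar:
    for line in listing:
        path = line.strip()
        if path:
            tar.add(path)  # directories are added recursively, like tar's default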

I appreciate the help I’ve gotten so far in generating a program/script to automate these steps (and getting it made into an uploaded package). Help in figuring out a good way to make an incremental backup would be very useful. I think there must be a nice way to do it with tar and freshen, or using the archive bit.

Some packages get removed and the latest version isn’t publicly archived. I don’t like that unmaintained, or any potentially installed, packages are removed from the Debian archives. I brought this up with the QA group and was referred to http://snapshot.debian.org and “the morgue”, which was on auric but now seems to be at merkel.debian.org:/org/ftp.debian.org/morgue/

I worry that not all packages will have md5 signatures of their files. I remember having a problem with many files not having md5 signatures, or even some packages having incorrect md5 signatures. I think *every* file that comes in a .deb should have an md5 signature that gets installed when the corresponding package gets installed. I wanted this to be a policy “must”, as it’s good for backups and for detecting system corruption (modifications that are intentional, malicious or accidental). md5 signatures should take little effort to create and maintain within an archive. This would help with tripwire- or AIDE-like setups (usually designed as file-system-based host intrusion detection systems, also called host IDSs or HIDS).
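Checking the md5 signatures that do exist is straightforward; this is roughly what the debsums tool does. A minimal Python sketch that reads dpkg’s per-package .md5sums files and reports missing or changed files (packages without an .md5sums file are simply skipped, which is exactly the gap complained about above):

#!/usr/bin/env python3
"""Sketch: verify installed files against dpkg's per-package md5sums lists."""
import glob
import hashlib
import os

def check(md5sums_path):
    with open(md5sums_path) as fh:
        for line in fh:
            expected, rel_path = line.split(None, 1)
            path = "/" + rel_path.strip()
            if not os.path.isfile(path):
                print(f"MISSING  {path}")
                continue
            with open(path, "rb") as f:
                actual = hashlib.md5(f.read()).hexdigest()
            if actual != expected:
                print(f"CHANGED  {path}")

for md5sums in sorted(glob.glob("/var/lib/dpkg/info/*.md5sums")):
    check(md5sums)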

Drew Daniels

Resume: http://www.boxheap.net/ddaniels/resume.html

May 31, 2011

ReactOS for hardware companies and GNU/kwin32

Filed under: technical — admin @ 9:21 pm

ReactOS can be useful for hardware companies looking for a cheap simple platform to show off their hardware when they only have a driver for Microsoft Windows.

Linux tools may someday be even easier to get for Windows through the Debian project. Cygwin and MinGW are good, but Debian has more resources to help maintain a larger number of programs. The Debian win32 mailing list has discussed several efforts for the port.

ReactOS is an Open Source operating system that aims for Windows compatibility. I used to work at Linear Systems Ltd., a computer engineering company that mainly developed hardware for the broadcast industry, but had specialized software as well. While working there I got increasingly interested in ReactOS.

Why my interest in ReactOS?

  • It’s free.
  • It’s Open Source.
    • It’s easier to debug low level issues when you have all the source code.
    • It’s easier to customize low level options when you have all the source code and can search and modify it.
    • Working on it I can give back to the community that gives to me.
  • Contributing to and using ReactOS might make tools that are only available on Linux, or more easily available from Linux, also available on Microsoft Windows (as I sometimes have to use Microsoft Windows).

To answer my questions in my notes:

  • ReactOS does support WDM drivers.
  • I never did find out if the Linear Systems driver would work on ReactOS which is a pity. I believe the driver is a free download so someone with hardware can give it a try.
  • The Linear Systems NDIS driver is likely now unsupported as there are no new parts needed to build the required hardware.
  • debian-win32 is largely a dead list now. I still periodically check the list archives. There have been some interesting new efforts to port dpkg and apt to Windows, but none that have done both using MinGW. Many of the supposed technical problems have either been removed in newer versions of Windows, or perhaps never existed. That, however, seems like a topic for a different post.
  • It seems mingw is the way to go to have glibc support for Microsoft Windows and ReactOS.
  • Debian GNU/win32 may be more possible given the success of the Debian GNU/kFreeBSD port that uses glibc on the FreeBSD kernel.
  • To get Debian onto the win32 kernel, the base packages need to be updated. There are existing ports of dpkg to Windows through at least Cygwin. The Emdebian project may have made this easier by simplifying what needs to be ported, with projects like Emdebian Crush that put off issues like porting a vanilla Perl to Windows. I should note, however, that there is a project to port vanilla Perl to Windows called Vanilla Perl.
  • ReactOS licenses allow it to be used for commercial purposes.
  • ReactOS can support ext2 through existing 3rd party Windows drivers.
  • My notes dated “Monday, January 26th, 2004” are as follows:


    ReactOS

    • Does it support WDM?
    • Does it support our DVB driver [ed note: Linear Systems’ PCI card driver]?
    • Does it support our NDIS driver [ed note: Linear Systems’ T1 card driver]?
  • Suggested on debian-win32 that GLibc maintainers wouldn’t support patches for MS Windows as it’s Closed Source. Investigate if they’d support ReactOS.
  • Can Debian be made to boot a ReactOS kernel?
  • Can ReactOS be componentized for Debian?
  • What things need to be done to get Debian to support ReactOS? (triplet?)
  • What are the licences on ReactOS?
  • Can ReactOS support ext2? ext3?

May 25, 2011

Locality of Reference

Filed under: Data Compression,technical — admin @ 11:52 pm

In my studies of Computer Science I learned about the principle of locality, which might be better called locality of reference. Two basic kinds of locality are temporal locality (locality in time) and spatial locality (locality in space). The principle basically says that similar things tend to group together. We see the same principle in many places in daily life and in other disciplines.

Some examples of locality include:

  • people speaking the same language tend to group together,
  • wheat fields being on land close together,
  • forests having many of the same species of tree,
  • minerals like gold being in high concentrations in certain areas.

In data compression, locality matters because it lets context windows be small enough to fit in memory and small enough to keep processing reasonable. A context is a kind of locality. A window is the area currently being looked at (or evaluated). Many compression algorithms use a sliding window: a view of several blocks of data that shifts so that when one block is done being evaluated, the window moves forward by one block.
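As an illustration of the sliding-window idea, here is a minimal Python sketch of an LZ77-style longest-match search. It is deliberately brute force; real compressors use hash chains or other indexes, and the window size here is arbitrary.

#!/usr/bin/env python3
"""Sketch: LZ77-style longest-match search inside a fixed sliding window."""

WINDOW = 4096  # how far back we are allowed to look

def longest_match(data: bytes, pos: int):
    """Return (offset, length) of the longest match for data[pos:] that
    starts inside the window of bytes immediately before pos."""
    best_off, best_len = 0, 0
    start = max(0, pos - WINDOW)
    for candidate in range(start, pos):
        length = 0
        while (pos + length < len(data)
               and data[candidate + length] == data[pos + length]
               and candidate + length < pos):   # keep the match behind the current position
            length += 1
        if length > best_len:
            best_off, best_len = pos - candidate, length
    return best_off, best_len

if __name__ == "__main__":
    text = b"abracadabra abracadabra"
    print(longest_match(text, 12))  # the second "abracadabra" matches the first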

Why am I talking about locality of reference in data compression? On April 14th, 2004 (2004/04/14), I wrote the following note to myself:


  • duplicate files
  • principle of locality
    • files by time
    • files by directory
    • files by type

n(n-1) = O(n^2). Every file in front of every other file can be done in parallel.


This means that for file ordering in an archive, there are some shortcuts to finding the optimal order that can take advantage of multi-processor systems. Now such checks can also take better advantage of increased parallelism and the faster random access provided by solid state drives (SSDs).

In the above note, O(n^2) is Big O notation for order n squared. That means doubling the input roughly quadruples the processing time; the growth curve is quadratic (polynomial), not exponential.
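To make the note concrete, here is a small Python sketch contrasting the quadratic all-pairs comparison with the usual shortcut of hashing file contents first. The hashing step is the part that parallelises nicely, since each file can be processed independently of every other file; the example paths are just placeholders.

#!/usr/bin/env python3
"""Sketch: finding duplicate files, the O(n^2) way and the hash-first way."""
import hashlib
from itertools import combinations
from multiprocessing import Pool

def same_content(a: str, b: str) -> bool:
    return open(a, "rb").read() == open(b, "rb").read()

def duplicates_quadratic(paths):
    # n(n-1)/2 comparisons: fine for a handful of files, painful for thousands.
    return [(a, b) for a, b in combinations(paths, 2) if same_content(a, b)]

def _digest(path):
    with open(path, "rb") as fh:
        return path, hashlib.sha256(fh.read()).hexdigest()

def duplicates_hashed(paths):
    # Hash every file (in parallel); only files with equal digests can be duplicates.
    groups = {}
    with Pool() as pool:
        for path, digest in pool.map(_digest, paths):
            groups.setdefault(digest, []).append(path)
    return [group for group in groups.values() if len(group) > 1]

if __name__ == "__main__":
    files = ["/etc/hostname", "/etc/hosts"]  # made-up example paths
    print(duplicates_quadratic(files))
    print(duplicates_hashed(files))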

For further reference, look up “locality of reference”, “sliding window” compression, “parallel processing”, and “Big O notation”. Also see my evolving notes on data compression, including some future information on “Drew Daniels’ compression magic”.

Drew Daniels

Powered by WordPress