Joe DeMesy implemented an MPI-based parallel password hashing program as his semester project for my Parallel Programming class. Many parallel password-cracking programs have been implemented before; what makes Joe's implementation unique is how trial passwords are generated. In his report, he details how he used statistical analysis of known human passwords (from the Sony and Gawker leaks) to prioritize which passwords are attempted. Using this prioritization, he can crack a password much faster than a standard random implementation.
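The core idea can be sketched in a few lines: build frequency statistics from a leaked corpus and try candidates most-common-first. The tiny `leaked` list below is a hypothetical stand-in for the real datasets, and this is only an illustration of the prioritization idea, not Joe's MPI implementation.

```python
from collections import Counter

# Hypothetical stand-in for a leaked-password corpus; the real project
# derives its statistics from far larger lists.
leaked = ["123456", "password", "123456", "qwerty", "123456", "password",
          "letmein", "qwerty", "dragon"]

def prioritized_guesses(corpus):
    """Yield candidate passwords most-common-first: the core idea of
    prioritizing trials by observed human behavior."""
    for guess, _count in Counter(corpus).most_common():
        yield guess

def crack(target, ordering):
    """Return how many trials an ordering needs to reach the target."""
    for trials, guess in enumerate(ordering, start=1):
        if guess == target:
            return trials
    return None
```

With this ordering, a common password like "qwerty" falls in a handful of trials, while a purely random ordering would need, on average, half the candidate space.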
You can find Joe's code here.
Showing posts with label algorithms.
Friday, September 2, 2011
Wednesday, August 24, 2011
GO GOL: Game of Life using Go Lang
Iggy Kracji's semester project for my Parallel Programming class (CSC471) was to implement Conway's Game of Life using the Go programming language and the Google App Engine.
The purpose of the class was to design, implement and analyze programs that run on parallel architectures, so his output is not pretty, but functional.
You can check out Iggy's code and analysis, and test it yourself at the App Site.
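Iggy's implementation is in Go and parallelized; for reference, the serial update rule at the heart of the project can be sketched in a few lines of Python:

```python
from collections import Counter

def step(live):
    """One Game of Life generation. `live` is a set of (x, y) cells."""
    # Count how many live neighbors each candidate cell has.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell lives next generation with exactly 3 neighbors, or with 2
    # neighbors if it is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates between horizontal and vertical with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
```

The parallel versions analyzed in the class partition the grid across workers; the rule applied per cell is the same.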
Labels:
algorithms,
Art,
artificial life,
CSC471,
game of life,
google,
parallel programming,
software robots
Sunday, September 19, 2010
Visual 6502 Processor Simulation
Greg James, Barry Silverman and Brian Silverman from Visual6502.org have been working for the last year on a transistor-level visual simulation of the 6502 microprocessor. Their work is incredibly detailed and very interesting. The 6502 design is a classic processor, and very important in computer history as it was used in the Apple I & II, Commodore, Atari and Nintendo computing systems.
In the summer of 2009, working from a single 6502, we exposed the silicon die, photographed its surface at high resolution and also photographed its substrate. Using these two highly detailed aligned photographs, we created vector polygon models of each of the chip's physical components - about 20,000 of them in total for the 6502. These components form circuits in a few simple ways according to how they contact each other, so by intersecting our polygons, we were able to create a complete digital model and transistor-level simulation of the chip.
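A toy illustration of that polygon-intersection step, assuming axis-aligned rectangles (the real model handles roughly 20,000 arbitrary polygons across multiple chip layers, with layer-dependent contact rules):

```python
def overlaps(a, b):
    """Axis-aligned rectangles (x1, y1, x2, y2); True if they intersect."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def electrical_nodes(shapes):
    """Group shapes into connected nodes: shapes that touch share a node.
    Simple union-find; the real chip model also honors which layers may
    actually form a contact."""
    parent = list(range(len(shapes)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(shapes)):
        for j in range(i + 1, len(shapes)):
            if overlaps(shapes[i], shapes[j]):
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(shapes)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Two touching wires and one isolated wire -> two electrical nodes.
wires = [(0, 0, 4, 1), (3, 0, 5, 3), (10, 10, 12, 11)]
```

Running connectivity extraction like this over every polygon yields the netlist that drives the transistor-level simulation.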
Labels:
algorithms,
circuits,
computing,
logic design,
machines,
photography
Wednesday, August 4, 2010
Prizes for Electric Car Algorithms
The CREATE Lab at Carnegie Mellon University is sponsoring a contest to develop efficient battery-management algorithms for electric vehicles. At the end of every month, high-performing entries will be awarded small prizes ($100 in Amazon gift cards), with a grand prize (possibly an actual electric vehicle) awarded at the end of the competition. At the end of each monthly judging phase, entries are open-sourced for all to see and share.
Tuesday, June 29, 2010
Graffiti Analysis Sculptures
Graffiti Analysis: 3D from Evan Roth on Vimeo.
Graffiti Analysis: Sculptures is a series of new physical sculptures that I am making from motion tracked graffiti data. New software (GA 3D) imports .gml files (Graffiti Markup Language) captured using Graffiti Analysis, creates 3D geometry based on the data and then exports a 3D representation of the tag as a .stl file (a common file format compatible with most 3D software packages including Blender, Maya and 3DS Max). Time is extruded in the Z dimension and pen speed is represented by the thickness of the model at any given point. I then have this data 3D printed to create a physical sculpture that serves as a data visualization of the tag. For the Street and Studio exhibition at the Kunsthalle Wien, I collaborated with an anonymous local Viennese graffiti writer and had the GA sculpture printed in ABS plastic. Graffiti motion data of his tag was captured in the streets (for the first time) at various points around Vienna.
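The pipeline Evan describes maps naturally onto a small transform, sketched here under the assumption that each motion-tracked sample carries (x, y, time, speed) and that thickness grows with speed; the actual GA 3D scaling factors are not specified, so the constants below are illustrative:

```python
def extrude_tag(samples, z_scale=1.0, base_radius=0.1, speed_scale=0.05):
    """Map motion-tracked samples (x, y, t, speed) to 3D points with a
    per-point radius: time becomes the Z axis, speed becomes thickness.
    A mesh exporter would then skin these points into an .stl solid."""
    return [(x, y, t * z_scale, base_radius + speed_scale * s)
            for x, y, t, s in samples]

# Three hypothetical samples from a single stroke of a tag.
strokes = [(0.0, 0.0, 0.0, 1.0), (1.0, 0.5, 0.5, 3.0), (2.0, 0.0, 1.0, 1.0)]
```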
More information (including software, source code, and many more pictures) can be found at Evan's website.
Labels:
3D modeling,
algorithms,
Art,
blinkenlights,
machine vision,
rapid prototyping
Sunday, May 16, 2010
Human Tetris
Cornell students Adam Papamarcos and Kerran Flanagan have built an awesome set of small games using microcontroller-based video processing. The details of the build (excellently documented; my students should take note) are provided at the Cornell Project Website, and more videos detailing how the system works are available at Engadget.
Labels:
algorithms,
Art,
games,
machine vision,
student projects,
video
Tuesday, April 27, 2010
AI-Controlled Mario and Level Generation
Last year, the IEEE Symposium on Computational Intelligence and Games hosted a competition in which participants were asked to develop an AI that can play Mario. Here's an example of a winning AI player:
This year they've added a level-generation aspect to the competition.
The level generation track of the competition is about creating procedural level generators for Infinite Mario Bros. Competitors will submit Java code that takes desired level characteristics as input and outputs a fun level implementing those characteristics. The winner will be decided through live play tests. For more information, visit the 2010 Mario AI Championship page.
They also have competitions based on Ms. PacMan, StarCraft, and others.
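As a toy illustration of the level-generation idea (the real entries are Java generators for Infinite Mario Bros), here is a sketch where a single "difficulty" input controls how often gaps appear in the ground:

```python
import random

def generate_level(length, difficulty, seed=0):
    """Toy procedural level: a string of ground '=' and gaps ' ', where
    higher difficulty means more gaps. Competition generators accept many
    more characteristics (enemies, coins, terrain); this shows only the
    parameters-in, level-out shape of the problem."""
    rng = random.Random(seed)  # seeded for reproducible levels
    tiles = [" " if rng.random() < difficulty else "=" for _ in range(length)]
    return "".join(tiles)

easy = generate_level(40, 0.05)
hard = generate_level(40, 0.4)
```

The live-play-test judging then answers the part a generator cannot: whether the resulting level is actually fun.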
Friday, April 23, 2010
Natural Motion for CG Characters
New algorithms from NaturalMotion allow digital characters to dynamically and realistically respond to changes in their environment. What is most interesting about this work is that the methods are not hard-coded: rather than painstakingly modeling the motion of walking characters by hand, they use a mixture of physics modeling and evolutionary algorithms to let the system 'learn' how to react to its environment. The characters first learn to walk, then dynamically adapt to perturbations like pushes and hits from objects and other characters, resulting in very robust and realistic motion.
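NaturalMotion's system is proprietary, but the flavor of "learning rather than hand-coding" can be sketched with a minimal (1+1) evolution strategy that evolves a single controller gain for a toy balance task; all dynamics and parameters here are illustrative:

```python
import random

def fitness(gain):
    """Toy balance task: a proportional controller tries to hold an
    unstable, pendulum-like state at zero; lower accumulated error wins."""
    angle, velocity, error = 0.5, 0.0, 0.0
    for _ in range(100):
        torque = -gain * angle
        velocity += 0.1 * (angle + torque)  # crude unstable dynamics
        angle += 0.1 * velocity
        error += abs(angle)
    return error

def evolve(generations=200, seed=1):
    """(1+1) evolution strategy: mutate the gain, keep the mutant only if
    it scores better. Real systems evolve far richer controllers."""
    rng = random.Random(seed)
    gain, best = 0.0, fitness(0.0)
    for _ in range(generations):
        candidate = gain + rng.gauss(0, 0.3)
        score = fitness(candidate)
        if score < best:
            gain, best = candidate, score
    return gain, best
```

The point is the division of labor: the designer specifies only the physics and a fitness measure, and the controller's behavior is discovered rather than scripted.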
Labels:
algorithms,
Art,
artificial intelligence,
evolutionary algorithms,
games,
video
Monday, April 12, 2010
Massive Music Machine
Pat Metheny's Orchestrion.
If you've got $33, he's playing with the Orchestrion this Friday at the Mesa Arts Center. See Tour Dates for details.
Monday, March 29, 2010
ViBe Background Extraction
Researchers at the University of Liège in Belgium have made a breakthrough in background extraction, the machine-vision task of separating a "normal" background image from more interesting "new" pixels such as moving objects.
This new algorithm is very high performance and computationally efficient. Unfortunately, it's completely patented, but a demo video and a paper describing the method are linked below.
O. Barnich and M. Van Droogenbroeck. ViBe: a powerful random technique to estimate the background in video sequences. In International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2009), pages 945-948, April 2009. Available as an IEEE publication or on the University site.
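Since the algorithm itself is patented, here is only a ViBe-flavored sketch of the sample-based idea for a single pixel: keep a set of past samples, classify a new value as background if it is close to enough of them, and randomly refresh the model over time. Thresholds and the spatial-propagation step of the published method are omitted or simplified:

```python
import random

class SampleModel:
    """Sample-based background model for one pixel (illustrative only;
    the published, patented algorithm differs in important details)."""
    def __init__(self, value, n_samples=20, radius=20, min_matches=2, seed=0):
        self.samples = [value] * n_samples  # initialized from first frame
        self.radius = radius
        self.min_matches = min_matches
        self.rng = random.Random(seed)

    def classify_and_update(self, value):
        # Background if the new value is near enough stored samples.
        matches = sum(abs(value - s) <= self.radius for s in self.samples)
        is_background = matches >= self.min_matches
        if is_background and self.rng.random() < 1 / 16:
            # Random-in-time update: overwrite one stored sample.
            self.samples[self.rng.randrange(len(self.samples))] = value
        return is_background

model = SampleModel(value=100)
```

The random update policy is what lets such models absorb slow lighting changes while still flagging genuinely new objects.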
Monday, March 1, 2010
Compressed Sensing - Getting something for nothing
Wired has an interesting article on the data-manipulation techniques that could lead to real Blade-Runner / CSI style image enhancement.
I haven't seen the original paper, so I'm a little skeptical. I'd like to see what kind of error is incurred by this successive approximation. How different is the reconstructed image from the original? There are always limits to these things.
1 Undersample
A camera or other device captures only a small, randomly chosen fraction of the pixels that normally comprise a particular image. This saves time and space.
2 Fill in the dots
An algorithm called l1 minimization starts by arbitrarily picking one of the effectively infinite number of ways to fill in all the missing pixels.
3 Add shapes
The algorithm then begins to modify the picture in stages by laying colored shapes over the randomly selected image. The goal is to seek what’s called sparsity, a measure of image simplicity.
4 Add smaller shapes
The algorithm inserts the smallest number of shapes, of the simplest kind, that match the original pixels. If it sees four adjacent green pixels, it may add a green rectangle there.
5 Achieve clarity
Iteration after iteration, the algorithm adds smaller and smaller shapes, always seeking sparsity. Eventually it creates an image that will almost certainly be a near-perfect facsimile of a hi-res one.
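The "seeking sparsity" steps above correspond to l1 minimization; a tiny 1D sketch using iterative soft thresholding shows the idea. The problem size, measurement operator, and parameters here are illustrative, not what any real camera uses:

```python
import random

random.seed(42)
n, m = 8, 5                       # 8 unknowns, only 5 measurements
x_true = [0.0] * n
x_true[2], x_true[6] = 1.0, -0.7  # the "sparse" scene: two nonzeros

# Random undersampling operator (each row is one measurement).
A = [[random.gauss(0, 1) / m ** 0.5 for _ in range(n)] for _ in range(m)]
y = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(m)]

def ista(A, y, lam=0.005, step=0.03, iters=5000):
    """Iterative soft thresholding: a gradient step on the data fit,
    then shrink small coefficients toward zero. The shrinkage is the
    'seek sparsity' step described above, phrased as l1 minimization."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [xi - step * gi for xi, gi in zip(x, g)]
        x = [max(abs(xi) - step * lam, 0.0) * (1 if xi > 0 else -1)
             for xi in x]
    return x

x_hat = ista(A, y)
```

Even this toy version drives the measurement residual far down while keeping most coefficients near zero, which is the "something for nothing" being claimed; how faithful the reconstruction is to the true scene is exactly the question raised above.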