
Software Projects

I have designed and developed quite a bit of software over the years. Listed below are the projects that I built either independently or largely on my own.

Message Database Application

My employer at the time needed a simple way for the software engineers to enter messages into the system. We were developing software for industrial equipment, which needed to send messages out to technicians and administrators. The bulk of the development was done in ANSI C, and we could have simply had the engineers enter the messages directly in C. However, if an error was made, it would not show up until run time, and it might not appear until the specific message condition actually occurred.
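
To give a rough idea of the problem (this is an illustrative sketch with made-up names, not the actual product code), a hand-maintained message table in C might look like the one below; the duplicate ID compiles cleanly and only causes trouble when one of those messages actually fires:

    /* Illustrative only: a hand-edited message table in ANSI C.
     * The duplicate ID below compiles without complaint and only
     * causes a problem at run time, when the message is triggered. */
    struct message {
        int         id;        /* must be unique across the system  */
        int         severity;  /* 0 = info, 1 = warning, 2 = alarm  */
        const char *text;      /* text shown to the technician      */
    };

    static const struct message messages[] = {
        { 100, 0, "Pump 1 started" },
        { 101, 2, "Coolant temperature exceeds limit" },
        { 101, 1, "Filter needs service" }  /* duplicate ID: no compile error */
    };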

In order to make sure that the messages were properly created, I was chosen to write an application that would take the input from the engineers and create a properly formatted database to be linked into the system software. I used a Microsoft Access database form as the interface for the engineer. The form ensured that the engineer created the message properly by restricting various fields based on what had already been filled in, and it also told the engineer how to invoke the alarm system from within their software module.

For example, if the engineer was creating a message, which was simply text sent to the administrator, as opposed to an alarm, which triggered other events like alarm lights, then all the alarm-specific choices would be "grayed out", or disabled.

The application was written using Visual Studio 6 and the Microsoft Foundation Classes (MFC). It would read the Access database and create various output files, and this is where things got a little tricky.

Computer architectures can be either "big-endian" or "little-endian", depending on how the processor orders the bytes of a multi-byte value in memory. Most microprocessors these days work with 32-bit (4-byte) values, but memory can still be accessed in one-byte chunks. So when a microprocessor stores a 32-bit value (called a word), it can store it in one of two ways: big-endian systems store the most significant byte at the lowest address, while little-endian systems store the most significant byte at the highest address.
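
A small C program makes the difference concrete (an illustration, not code from the project):

    /* Store a 32-bit word and inspect its bytes in memory.
     * Big-endian machines put 0x11 at the lowest address;
     * little-endian machines put 0x44 there. */
    #include <stdio.h>

    int main(void)
    {
        unsigned int word = 0x11223344u;
        unsigned char *bytes = (unsigned char *)&word;

        printf("lowest address:  0x%02X\n", (unsigned)bytes[0]); /* 0x11 big-endian, 0x44 little-endian */
        printf("highest address: 0x%02X\n", (unsigned)bytes[3]); /* 0x44 big-endian, 0x11 little-endian */
        return 0;
    }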

The Windows PC, based on the Intel Pentium, is little-endian, while the equipment used a PowerPC processor configured as big-endian. So when I built the binary image on the x86, I had to swap the bytes around to accommodate the change in endianness.
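
The conversion itself is a routine bit of bit shifting, along these lines (a minimal sketch; the actual tool was the MFC application described above, not this standalone function):

    /* Swap the byte order of a 32-bit value so that a little-endian
     * x86 host can write words a big-endian PowerPC will read correctly. */
    unsigned int swap32(unsigned int value)
    {
        return ((value & 0x000000FFu) << 24) |
               ((value & 0x0000FF00u) <<  8) |
               ((value & 0x00FF0000u) >>  8) |
               ((value & 0xFF000000u) >> 24);
    }

    /* Writing swap32(0x11223344) on the x86 lays the bytes out in the
     * image as 11 22 33 44, which the PowerPC reads back as 0x11223344. */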

The application also produced text files so that the revision control software could track changes. Revision control software generally does not handle binary data very well, so I had the application write the binary data out as text in a C-like structure. This allowed developers to see exactly which changes had been made to the database.
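
A hypothetical example of what one of those generated text files might have looked like (the names and messages are invented; the point is that the binary data is rendered as a C-like structure a diff can show line by line):

    /* Generated file (do not edit by hand) -- text rendering of the
     * binary message database for revision control. */
    enum severity { INFO, WARNING, ALARM };

    struct message_record {
        int           id;
        enum severity severity;
        const char   *text;
    };

    static const struct message_record message_table[] = {
        { 100, INFO,  "Pump 1 started" },
        { 101, ALARM, "Coolant temperature high" },
        { 102, ALARM, "Door interlock open" }
    };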

I also had the program create an HTML help file, which could be stored with the documentation. This helped customers understand each alarm or message and what to do about it. If an engineer changed the message or alarm data, the user documentation was automatically updated as well, which freed the engineer from having to maintain it by hand.

Of course, all of this was well documented, with plenty of examples of how to use the messaging system. Every time someone found a new way to break it, I updated the documentation so others could avoid the same mistake. Creating that documentation also relieved me of having to answer the same questions over and over again.

This software made it easy for the engineers to add messages and alarms correctly and quickly, allowing them to concentrate on their module, instead of the messaging system. It also allowed the engineers to easily keep track of the changes and quickly correct mistakes.

Traffic Analysis and Reliability Application

These programs were requested by the sales department on behalf of one of our customers. The customer wanted to know what the reliability and traffic capacity of the system were for various configurations. I volunteered to write the programs, even though I knew next to nothing about either subject.

As it turned out, we had resident experts in reliability and traffic modeling working for us, but they were in a different business unit doing their own thing. So over many lunches, I absorbed the fundamental knowledge needed to develop the models. Much of it was statistics, and the models themselves were not too complicated. Computing reliability and traffic figures for individual assemblies was fairly straightforward; computing them for the entire system, with various assemblies, some optional and some redundant, was a bit more involved.

I started with the reliability model. We had a software package that would calculate the reliability of an assembly based on its bill of materials (BOM). It stored the accepted failure rates for the discrete components, so with the BOM and the parts data it could compute the failure rate for a given assembly.
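
The calculation behind that is the standard parts-count approach: assuming constant failure rates, the assembly's failure rate is simply the sum of its component failure rates, and the MTBF is the reciprocal. A small sketch with made-up numbers:

    /* Parts-count sketch: sum the component failure rates (here in
     * failures per million hours, FPMH) and take the reciprocal to
     * get the assembly MTBF.  The rates below are made up. */
    #include <stdio.h>

    int main(void)
    {
        double fpmh[] = { 0.012, 0.450, 0.038, 1.200, 0.075 };
        int    count  = (int)(sizeof fpmh / sizeof fpmh[0]);
        double total  = 0.0;
        int    i;

        for (i = 0; i < count; i++)
            total += fpmh[i];

        printf("Assembly failure rate: %.3f FPMH\n", total);
        printf("Assembly MTBF: %.0f hours\n", 1.0e6 / total);
        return 0;
    }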

But it did not take into account the architecture of the system. Many of the components would not bring down the entire assembly, only a small section of it; or a failure might degrade the quality of the service while the service itself remained available. Because of this, the out-of-service time figures tended to be greatly inflated.

So we exported all of the program's reliability information to Excel, where I could manipulate it using Visual Basic for Applications. I wrote a VBA program to take the BOM, the reliability data, and the schematic information and produce a more detailed view of the assembly.

The circuit schematics often had individual circuits on their own sheets. One circuit might have nothing to do with another, so a failure in one would not affect the other. My program grouped the reliability data into sub-assemblies, and an engineer could then determine the effect of a failure in one block on the others. This gave a more detailed view of the assembly and allowed the engineer to quickly quantify the effects of a failure without having to pore over the schematic part by part.

Once an assembly's reliability was computed, the user could create a system by specifying the number and type of assemblies and their redundancy. The software would then compute the overall system reliability based on that configuration. This allowed the sales team to quickly and effectively offer reliability data tailored to the customer's desired configuration, and it helped reassure the customer that our systems were reliable and that we had devoted real time to the question instead of simply throwing them some autogenerated numbers.
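
The combination rules themselves are standard reliability math: assemblies that must all work multiply together, while a redundant group fails only if every copy fails. A simplified sketch of that logic (the numbers and assembly names are hypothetical, and the real tool lived in Excel rather than C):

    /* Combine assembly reliabilities into a system figure.
     * Series: everything must work, so multiply.
     * Redundant group: fails only if every copy fails. */
    #include <stdio.h>

    static double redundant_group(double r, int copies)
    {
        double all_fail = 1.0;
        int i;
        for (i = 0; i < copies; i++)
            all_fail *= (1.0 - r);
        return 1.0 - all_fail;
    }

    int main(void)
    {
        /* Hypothetical per-assembly reliabilities over a mission time. */
        double controllers = redundant_group(0.990, 2); /* redundant pair   */
        double supplies    = redundant_group(0.995, 2); /* redundant pair   */
        double backplane   = 0.999;                     /* single, required */

        double system = controllers * supplies * backplane; /* series */
        printf("System reliability: %.6f\n", system);
        return 0;
    }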

The traffic model was much simpler and was really just a function of the system configuration; no data had to be imported from other sources. It stated the traffic capacity of a system for a given configuration. So when the salespeople configured a system in Excel, they could draw on my applications to give the customer a solid set of reliability and traffic numbers.

Manufacturing Test Application

Ener1's main product line (at the time) was small embedded computers and PC I/O products (like modems). When I was hired, they had just designed their first embedded computer and needed a way to test it after it was manufactured. I developed a Windows program that would exercise all the major interfaces to make sure that the electronics and software were functioning properly. Each computer had the following interfaces, all of which needed to be tested: keyboard, mouse, TV out, VGA, audio input, stereo speaker output, an infrared controller, a printer port, and a modem.

The test platform involved connecting the unit under test (UUT) to the host computer through a variety of cables and interfaces. The keyboard and mouse ports were connected to black boxes that converted RS-232 commands into keyboard codes and mouse movement codes. The TV output signal was connected to a video capture card on the host, while the VGA connector was simply connected to a VGA monitor; to keep costs down, we decided not to use a VGA-to-TV converter with a second video capture card, so the VGA circuit was checked by an operator looking at the monitor. The audio input and speaker outputs were cross-connected to a sound card on the host computer. The host was also connected to an IR-to-RS-232 converter for driving the infrared port. The parallel port was tested by connecting a Zip drive to it; the Zip drive also contained the custom test applications that ran on the UUT's side. The modem was connected to a telephone line simulator, which in turn was connected to a modem on the host computer.

The test began with the operator connecting all the cables and initiating the test. The UUT would boot up and detect the Zip drive; if the drive was detected, the OS would load the file system and could then run the programs stored on it. The first thing the host computer did was send a command through the keyboard interface to turn the screen a solid color and then look for that color on the video capture card. If the color appeared, then the parallel port and the TV circuitry were working, and a basic communications loop had been established. The host would then send some audio to the UUT, which stored it and played it back through the UUT's speaker interface. The host recorded the result, performed a Discrete Fourier Transform, and compared it with the original to make sure the UUT had not mangled the sound. The modem was tested by commanding the UUT to dial the host and transfer files back and forth. The mouse and IR ports were tested by accepting commands from the host; both could deliver keyboard commands, which were routed to the command line.
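
The audio check boils down to comparing the frequency content of the tone before and after it goes through the UUT. Here is a naive DFT comparison in the spirit of that test (illustrative only; the production check ran inside the Windows test application on real captured audio):

    /* Compare the dominant frequency of a reference tone with the
     * tone recorded back from the UUT using a naive DFT.  The signals
     * below are synthetic stand-ins for real captured audio. */
    #include <math.h>
    #include <stdio.h>

    #define N 256  /* samples in the analysis window */

    static const double PI = 3.14159265358979323846;

    static void dft_magnitude(const double *x, double *mag)
    {
        int k, n;
        for (k = 0; k < N / 2; k++) {
            double re = 0.0, im = 0.0;
            for (n = 0; n < N; n++) {
                re += x[n] * cos(2.0 * PI * k * n / N);
                im -= x[n] * sin(2.0 * PI * k * n / N);
            }
            mag[k] = sqrt(re * re + im * im);
        }
    }

    /* Index of the strongest frequency bin. */
    static int peak_bin(const double *mag)
    {
        int k, best = 0;
        for (k = 1; k < N / 2; k++)
            if (mag[k] > mag[best])
                best = k;
        return best;
    }

    int main(void)
    {
        double sent[N], received[N];
        double mag_sent[N / 2], mag_received[N / 2];
        int n;

        for (n = 0; n < N; n++) {
            sent[n]     = sin(2.0 * PI * 8 * n / N);        /* reference tone  */
            received[n] = 0.9 * sin(2.0 * PI * 8 * n / N);  /* attenuated copy */
        }

        dft_magnitude(sent, mag_sent);
        dft_magnitude(received, mag_received);

        printf("%s\n", peak_bin(mag_sent) == peak_bin(mag_received)
                           ? "PASS: audio loop preserved the tone"
                           : "FAIL: audio loop mangled the tone");
        return 0;
    }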

When I wrote the program, I designed it to perform the tests according to a script file. The script told it which command to send, how to send it (keyboard, IR, mouse, etc.), and what result to look for (audio in, a color on the video capture card, etc.). The nice thing about this setup was that it allowed us to easily adapt the program to test other products; it was readily reused for the modems, video capture cards, and sound cards that Ener1 also manufactured. This led to a common platform for testing many of our products and simplified the design of tests for new ones.
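
A sketch of the script-driven idea (the structure and the example steps here are reconstructed from the description above, not the actual script format):

    /* Each test step names a channel to send a command on and a
     * result to look for; the test program simply walks the list.
     * The steps and I/O routines below are illustrative stand-ins. */
    #include <stdio.h>

    struct test_step {
        const char *send_via;   /* "keyboard", "ir", "mouse", ...  */
        const char *command;    /* command delivered to the UUT    */
        const char *expect_via; /* "video", "audio", "modem", ...  */
        const char *expected;   /* result to look for              */
    };

    static void send_command(const char *via, const char *cmd)
    {
        printf("send [%s]: %s\n", via, cmd);  /* the real tool drove hardware here */
    }

    static int check_result(const char *via, const char *expected)
    {
        printf("check [%s] for: %s\n", via, expected);
        return 1;  /* always passes in this sketch */
    }

    int main(void)
    {
        /* In the real tool these steps came from a script file. */
        static const struct test_step script[] = {
            { "keyboard", "fill screen red", "video", "red frame"       },
            { "keyboard", "play test tone",  "audio", "reference tone"  },
            { "ir",       "echo IR_OK",      "video", "IR_OK on screen" }
        };
        int steps = (int)(sizeof script / sizeof script[0]);
        int failures = 0;
        int i;

        for (i = 0; i < steps; i++) {
            send_command(script[i].send_via, script[i].command);
            if (!check_result(script[i].expect_via, script[i].expected))
                failures++;
        }

        printf("%d of %d steps failed\n", failures, steps);
        return failures ? 1 : 0;
    }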

Conclusion

As you can see, I have developed some fairly complicated tools and software. I don't know it all, but I can learn whatever a project needs and deliver a solid product at a reasonable cost. For that reason, I don't consider any project beyond my abilities.

Contact

Email Brian Rose