Category Archives: Uncategorized

6/26/2018 – Hot Rod and OPP…what happened?

Let’s get some history: Jeremy is a long-time supporter of OPP (the Open Pinball Project). He was one of the original backers of the OPP Kickstarter. He runs the Pinball Makers website and created many of the instructions on how to build and connect OPP cards. I had documentation, but Jeremy had much better skills at putting together diagrams that made populating the boards easy to understand for people other than me. Jeremy rocks, and I thank him for all of the hard work he has done over the past three or four years.

Jeremy decided to switch his pinball controllers from OPP to P-ROC (I really have no idea what the acronym stands for). I support anyone’s decision to choose the P-ROC because it is a very mature platform. I think of OPP as providing a service mostly for hackers. I think of the P-ROC as a full-fledged product. Side note: I was talking to Gerry a couple of years back, and I believe he thought OPP was stealing his customers. I told him that I believed OPP would eventually drive customers to his products. I simply think that is what happened in this case.

So nobody changes their pinball controller without having a heck of a good reason.  A major change like that involves a significant amount of work for rewiring, and a lot of extra cost for buying those more expensive controllers.

I have rethemed and run two different machines using OPP controllers. They work without issues, and my main complaint is that their boot time is too long. Why are they different from Jeremy’s machine? The main difference is that I use the OPP pinball framework rather than MPF (Mission Pinball Framework). This points to the Achilles’ heel of the OPP boards: communication to the boards is slower than every other pinball controller out there.

Let’s give a bit of history on OPP.  It was meant to control late ’80s style pinball machines.  That was the goal.  Not ’90s style with DMDs and all sorts of bells and whistles, but the much simpler ’80s style with incandescent bulbs and such.  This was mostly based on my knowledge of how much skill I had at making a pinball machine.  I think that I looked at both Firepower 2 and Dolly Parton as a realistic goal of what I needed to be able to control.  Firepower has a lamp matrix so I used that as a goal for what OPP boards needed to support and how frequently updates were required to lamps.

After a little research on lamp matrices: each column is strobed for 2 ms and there are 8 columns, so each light bulb is updated every 16 ms. An insert light that is “on” is really only lit for 2 ms out of every 16 ms, but because of persistence of vision, it appears to be on all the time. Since OPP bulbs are driven directly and are on 100% of the time, there was really no reason to update lamps more quickly than once every 16 ms, and even that doesn’t make much sense because the bulb will already look continuously lit.

Another typical mode for a bulb is to blink, either rapidly or slowly. From previous projects, the most rapid blink that is comfortable to me, and doesn’t just look like flickering, is about 100 ms. A slow blink is about once per second. Since blinking an insert rapidly or slowly is needed so frequently in pinball, I created OPP commands so that the PSoC 4200 could blink at either speed automatically. That further reduces serial traffic, and means each board needs updates less often than once every 100 ms.

The OPP framework uses the above types of commands to minimize serial communication. It is implemented with a 100 ms tick: the framework collects all lamp updates during the tick, and at the end of the tick it computes what the current state of the incandescent bulbs should be and sends a single command to update them if they differ from the previously saved state.
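A minimal sketch of that batching scheme (the class, command format, and addresses here are illustrative, not the actual OPP framework code):

```python
import struct

TICK_MS = 100  # framework tick: lamp changes are batched at this rate

class IncandCard:
    """Collects lamp on/off requests and emits at most one update per tick."""

    def __init__(self, addr):
        self.addr = addr        # card address on the serial chain
        self.pending = 0        # desired lamp bitmask, updated at any time
        self.last_sent = None   # bitmask last put on the wire

    def set_lamp(self, index, on):
        if on:
            self.pending |= (1 << index)
        else:
            self.pending &= ~(1 << index)

    def tick(self):
        """Called every TICK_MS; returns a command only if state changed."""
        if self.pending == self.last_sent:
            return None  # no serial traffic at all for an unchanged card
        self.last_sent = self.pending
        # hypothetical wire format: address byte + 16-bit lamp bitmask
        return struct.pack("<BH", self.addr, self.pending)

card = IncandCard(addr=0x20)
card.set_lamp(0, True)
card.set_lamp(0, False)         # on and off within one tick: no net change
card.set_lamp(3, True)
assert card.tick() is not None  # one command covers the whole card
assert card.tick() is None      # state unchanged, nothing sent
```

However many lamps flip during the 100 ms window, the card never generates more than one command per tick, which is what keeps the serial link quiet.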

On MPF, I tried to implement the same sort of controls (without the blink commands, since there is really no equivalent in MPF). I thought I got pretty close, but after looking through Jeremy’s log files, I can definitely see that I didn’t achieve what I wanted. (Since I don’t have an MPF machine, most of the coding that I do in MPF is “blind” coding. I don’t have a good way to test the code, and really rely on others to make sure it is correct.)

MPF seems to treat an individual lamp as an object. (Object-oriented-wise, that makes a ton of sense.) The OPP pinball framework uses the whole card as an object. I organized it that way to make it easier to collect updates to all the lamps on a whole PSoC 4200 and minimize the number of serial commands. The MPF object view makes aggregation more difficult, since individual lamps aren’t tied together into a single object. I tried to get this to happen, but evidently I didn’t do a good job.

One last reason to minimize turning lamps on and off quickly is flyback voltage. Jeremy kept reporting that the incandescent boards were getting really hot at the BS-170 MOSFETs. That confused me because my MOSFETs didn’t have that issue. (Van Halen uses BS170s just like his machine.) What I failed to take into account is that the framework was turning the MOSFETs on and off very quickly. Just as with solenoids, when an incandescent MOSFET is turned off, flyback voltage builds up. With solenoids, an extra diode is placed across the coil to bleed off the voltage. With an incandescent bulb, the current is much lower, so it doesn’t build up as much flyback voltage and the diode isn’t normally necessary. But if the incandescent is switched on and off rapidly, it becomes more of an issue, especially because of heat build-up at the semiconductor junction.

So here is an example from Jeremy’s log files, right about when everything goes wrong:

2018-05-23 06:30:44,861 : DEBUG : OPP : Update incand cmd: 0x20 0x13 0x07 0x00 0x0f 0x5f 0x36 0x23
2018-05-23 06:30:44,866 : DEBUG : OPP : Update incand cmd: 0x21 0x13 0x07 0xdc 0x10 0x4e 0x0e 0xd6
2018-05-23 06:30:44,871 : DEBUG : OPP : Update incand cmd: 0x21 0x13 0x07 0xdd 0x10 0x4e 0x0e 0xc0
2018-05-23 06:30:44,875 : DEBUG : OPP : Update incand cmd: 0x21 0x13 0x07 0x9d 0x10 0x4e 0x0e 0x5b
2018-05-23 06:30:44,880 : DEBUG : OPP : Update incand cmd: 0x21 0x13 0x07 0x9f 0x10 0x4e 0x0e 0x77
2018-05-23 06:30:44,884 : DEBUG : OPP : Update incand cmd: 0x21 0x13 0x07 0x9b 0x10 0x4e 0x0e 0x2f
2018-05-23 06:30:44,889 : DEBUG : OPP : Update incand cmd: 0x21 0x13 0x07 0x9b 0x10 0x5e 0x0e 0x78
2018-05-23 06:30:44,893 : DEBUG : OPP : Update incand cmd: 0x21 0x13 0x07 0x9b 0x00 0x5e 0x0e 0xda
2018-05-23 06:30:44,898 : DEBUG : OPP : Update incand cmd: 0x20 0x13 0x07 0x00 0x0f 0x5f 0x37 0x24
2018-05-23 06:30:44,906 : DEBUG : OPP : Update incand cmd: 0x23 0x13 0x07 0x00 0x00 0x00 0x3f 0xf2
2018-05-23 06:30:44,910 : DEBUG : OPP : Update incand cmd: 0x20 0x13 0x07 0x84 0x89 0xf8 0x00 0xcd
2018-05-23 06:30:44,915 : DEBUG : OPP : Update incand cmd: 0x21 0x13 0x07 0x1b 0x00 0x5e 0x0e 0xeb
2018-05-23 06:30:44,919 : DEBUG : OPP : Update incand cmd: 0x21 0x13 0x07 0x1b 0x01 0x5e 0x0e 0x80

So within 58 ms (44.919, the end timestamp, minus 44.861, the start timestamp), 13 incandescent update commands are sent. Jeremy had eight cards in his system, broken into two strings of four cards each. The first string has two PSoC 4200 cards with incandescent wings; the second string has three. Completely updating every incandescent bulb in the system should therefore take only 5 commands. Anything more than 5 commands will not be perceived by the human eye because of persistence of vision. Again, I believed that I had implemented this in MPF properly, but I guess I did not. The other reason I believe this is the issue is that it happens right when things failed on his machine.
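Putting rough numbers on that (a back-of-the-envelope check using the timestamps from the log excerpt above):

```python
# Timestamps of the first and last commands in the excerpt (seconds)
window_s = 44.919 - 44.861          # 58 ms window
commands_seen = 13
incand_cards = 2 + 3                # incandescent-equipped cards across both strings

rate_seen = commands_seen / window_s      # observed command rate
rate_needed = incand_cards / 0.100        # one update per card per 100 ms tick
print(f"observed ~{rate_seen:.0f} cmd/s vs ~{rate_needed:.0f} cmd/s actually needed")
```

Roughly 224 commands per second were observed where about 50 per second would have fully refreshed every bulb, which is the over-driving described above.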

What else went wrong? Well, Jeremy reported that he was seeing CRC errors, and when I originally looked into the log files, I couldn’t find any CRC errors. Because of that, I kept emailing Jeremy asking for log files that showed the errors (and ignoring the log files that he sent me), while Jeremy was probably thinking to himself that the log files were showing the errors. Jeremy was just reporting what he had seen before. (This failure looks the same externally as a CRC error, but it is really a very different beast.) I assumed that I hadn’t received the correct log file, when I should have realized that the issue might have been described incorrectly. (Unfortunately, it was also a very busy time at my real job, which meant that I couldn’t spend as much time on it as I usually would.)

So instead of communication issues, I should have been looking into recovery from a buffer overflow. Again, with the OPP pinball framework, the machine never gets buffer overflows because the tick ensures that an overflow can’t happen. Now, knowing that MPF can easily cause buffer overflows, I need to spend more time making sure that the OPP firmware can recover gracefully from them.

The above discussion may be seen as me pointing the finger at somebody else. Far from it. MPF provides a ton of features that Jeremy utilized. MPF provides stock code for running the ball trough. (That is all individually coded in the OPP pinball framework and isn’t a nice simple object.) MPF has stock code for running EM-style score reels. (Score reels require a good amount of code to work properly, and they are definitely not something that the OPP framework supports without hand-coding.) The OPP framework is straight Python code, while MPF offers YAML to make scripting rules easier. That helps people fearful of coding feel more comfortable. The biggest thing MPF offers is its graphics capabilities. Look at the Nightmare Before Christmas pinball machine and see what Mark was able to accomplish using the MPF framework. (The OPP framework only really supports static images. When switching songs on the jukebox in Van Halen, there is a single image for each “highlighted” song selection, so it is really just switching the background image.)

Well there it is.  That is my retrospective on what I think went wrong.


1/22/2018 – Time for the reboot

So the OPP cards are all done, and not much else has been happening lately. Yeah, maybe I could lay out another couple of cards to make one or two people happy (and that might happen at some point), but that isn’t really interesting to me. (Edit: strangely, since I wrote that sentence I ended up doing exactly that, and the new “plank” boards are in the repository. I have no plans to fabricate the PCBs at the moment unless I need them. Basically there are two types: 8 solenoids and an interface in a single plank (1019), or 16 incandescents and an interface in a single plank (1020).) Where do we go from here?

The original goal of the Open Pinball Project was to create a complete pinball machine from scratch.  The tag line at the top of the site is “Open source pinball hardware and software – one group’s quest to build the perfect pinball machine, or at least a pinball machine for a reasonable price!”  I think that OPP is going to go back to its roots.

The goal was to build ten pinball machines. I know that is never going to happen because I’m just not interested in doing that, and only a fool would give me money. (Well, except those Kickstarter people…thanks for that.) A second goal that I always had in the back of my mind was to create a completely open source pinball machine. That is where we are going for the next couple of years.

What is an open source pinball machine?  The idea is that there is going to be enough information on this website and in the repository that a person can build a copy of the pinball machine from scratch.  It will contain a whole bunch of things such as:

  • Bill of Materials (BOM), so all parts can be purchased from reputable suppliers.  This is in stark contrast to all my previous projects that simply reused parts salvaged from random pinball machines.  Parts like flipper assemblies, pop bumper assemblies, side rails, lockdown bar, legs, etc.
  • GCode for using a CNC router to cut the playfield
  • Instructions for making the cabinet and backbox from scratch (may also be cut using a CNC router if that makes sense)
  • Art files for playfield and cabinet graphics
  • Wiring diagrams
  • Sub-assembly diagrams for interactive toys
  • Source code to run pinball rules (based on MPF)
  • STL files for all models for 3D printing or whatever file format makes most sense
  • Templates for creating ball guides
  • Hopefully step by step instructions from start of project to end of project
  • A lot more stuff should be on this list…

Here is a quick rundown of the pinball machine:

  • Disaster! themed –  destruction from tornados, earthquakes, sinkholes, volcanoes, and maybe a giant animal if lucky
  • Wide body machine – If only making a single pinball machine, throw everything into it so there doesn’t need to be another one.  Wide body = more space.
  • LCD for displaying animations and slides
  • Original
    • playfield layout
    • artwork
    • toys
  • Standard pinball parts from an established supplier
  • Pinside thread will eventually be created to get more collaborative input
  • Single board computer for running rules
  • Future pinball simulation of playfield
  • Planning NeoPixel full RGB lighting

I fully realize I can’t design a pinball machine from scratch. (Well, not one that is going to be fun to play.) I’m hoping for the machine to be a much more collaborative project and take input from many different sources. Because of that, it is going to take a long time. (Since it is only a hobby, I work on it when I have time. Right now my basement is running at about 48 degrees, which doesn’t make me want to spend much time down there. In the summer, when my basement is 65 degrees and the upstairs is in the high 70s, I seem to do a lot more work in the basement.)

This year, the focus will be on the playfield. The playfield design is going to be as original as possible. (Basically, I’m not going to steal a previous playfield design. That would be the easy way out. Because of that, it is going to take a large amount of time and many revisions.) To that point, Joe and I have started to bat some ideas back and forth. Currently we are throwing a playfield design back and forth using Future Pinball. The physics in Future Pinball aren’t great, but we should be able to get the basic shots and geometries down. I expect that we can throw general ideas at Future Pinball, but building a white wood is not just a good idea; it is necessary.

To add more collaboration, I’m trying to sign up people in the MA/NH area for play testing of the white wood and beyond. That adds one more avenue for creative ideas to be discussed and proposed. A couple of people have said they are interested, including some players that are much better than I am. Another nice thing is that it forces the design to move forward and reduces the chance of getting stuck at some point. Anyone who is interested, send me a note (my email is at the bottom of the about page). When I get things far enough along, I will set up bi-weekly meetings to gather people. I’m hoping to make the prototype portable enough that I can take it to various locations to get as much feedback as possible.

Goals for the pinball machine:

  • Hopefully it will be fun
  • Rules will be run by MPF.  I state this with every new pinball machine and then back out because I don’t have enough time.  Without MPF, people won’t be able to extend the rules.  I feel it is a requirement.
  • It would be great if another person would build one to verify the documentation, but since I can’t control that, oh well.
  • Boot time must be minimized.  Van Halen takes 1 or 2 minutes to boot.  Taxi takes maybe 10 or 15 seconds.  That means that I never turn on Van Halen or SS3 if I want to play a quick game.  I realize this is interrelated with MPF, but it is very important for how I play pinball at my house.

Okay, this post has gone on for long enough.  Time to get back to it and publish this before more things happen and I have to revise it yet again.

Well, if nothing else, this pinball machine is going to be a “Disaster!”

12/23/2017 – Year end summary

It has been a busy year at the Open Pinball Project (OPP). I think the peak of the year was bringing two different machines, both using OPP hardware, to Pintastic. Van Halen ran without flaws (except that nobody could hear the sounds because the free play room is ear-splittingly loud). SharpeShooter3 came out of its cocoon, was updated to Gen2 boards, and ran well, but not perfectly. (Maybe I should have run a couple of tests instead of just powering it on, flipping it once, and saying good to go.) My bad, but I was so busy with the Van Halen machine that I didn’t have the time. I don’t believe that I will bring two machines to Pintastic again. They barely fit in the car, and just the setup and teardown of two machines for two short days was a lot of hassle. (It didn’t help that I had to drive back home, a 3 hour round trip, because I forgot something. That was probably the worst part.)

The Mission Pinball Framework (MPF) OPP platform got some major updates thanks mostly to Jan.  MPF should be running better than ever using his latest changes.  It addresses the fact that previously the config files had to be tuned by hand to figure out the polling frequency, and now that should occur auto-magically, just like the Spike stuff.

The next two years may be big ones for OPP. I’m hoping to announce another project at the beginning of the year if I can get everything lined up that I need to get lined up. I have to talk to a lot of different people, and hopefully find a few others to come on board to help. Joe has already said that he is in, and that is a big part of making the next project possible. (I need his mechanical genius/3D printer/etc.) I need a couple of local people to really make it a reality, and I just haven’t run into the right people in the area. I’m targeting one guy who is a really good player in hopes that he can provide some necessary feedback.

Wishing everybody a Merry Christmas and a good end of the year.  Keep coming up with good pinball ideas, and making them happen.


11/29/2017 – Let’s talk about Serial Ports

Serial ports still make the world go ’round, whether you believe it or not. Serial ports were created on the second or third day, depending on the translation of Genesis that you are reading. Serial ports can be fun little wires that connect one microcontroller to another, so let’s talk a little bit about them.

Back in the good old days, every single computer in the world had one or even two RS232 ports.  Why?  So you could connect to that modem.  So you could connect two PCs together to transfer files (since it was faster than floppy disks).  You get bonus points if you have ever had to transfer data between computers using a null modem connector (pin 2 – pin 3 crossover) and just serial ports because nobody had network cards in those days.  Oh, those were the days!

Why are we talking about serial ports you may ask?  Well, because even before there was OPP hardware, there were calculations on serial bandwidth and would it be enough.  There is no reason to start a project unless you first answer a couple of feasibility sanity questions and make sure what you are trying to do makes some semblance of sense.  (This is why the propane powered snow melter for driveways never made it beyond the conceptualizing stage.  The thermodynamics for that project didn’t pass muster).

The original Gen1 cards (using an HCS08 processor) ran serial communications at 38,400 bps. They were cute little cards that could support 16 inputs, or 8 solenoids with 8 inputs. There was no mixing and matching of capabilities; you could not make a 4 solenoid Gen1 card because that would have needed a different layout that I never created. At 38,400 bps, the cards could send and receive 3,840 bytes/sec. That doesn’t sound like much, but assuming polling every 10 ms, there were 38.4 bytes available to send/receive data. To receive data from an input card, the message is 5 bytes long. To receive data from a solenoid card (i.e. read its switch states), the message is 4 bytes long. That means there is enough time to service a system of 4 input cards + 4 solenoid cards. That means 96 total inputs ((4 * 16) + (4 * 8)) for the system, which seemed more than adequate to me. I didn’t really imagine putting 32 solenoids in a system and figured it could easily be dropped to 24 solenoids or fewer, which gave me more breathing room on the serial port. (Only 32 bytes per 10 ms.)
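The feasibility arithmetic from that paragraph can be written out directly (just the numbers above, nothing hidden):

```python
# Gen1 serial budget sanity check
bps = 38_400
bytes_per_sec = bps // 10       # 10 bits on the wire per byte: start + 8 data + stop
polls_per_sec = 100             # polling every 10 ms
budget = bytes_per_sec / polls_per_sec   # bytes available per poll

input_cards, solenoid_cards = 4, 4
needed = input_cards * 5 + solenoid_cards * 4   # 5-byte and 4-byte read replies
total_inputs = input_cards * 16 + solenoid_cards * 8

print(budget, needed, total_inputs)   # 38.4 36 96 -- the poll fits in the budget
```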

So that design assumed that I would poll the inputs every 10 ms. 10 ms seems pretty slow, right? Wouldn’t I need to poll the inputs faster than that? Well, the way I originally figured it out was to probe a pinball machine and look at the rate of the column strobe for the switch matrix. It is now a little simpler, and I’ll instead refer to online information. According to the PinballRehab website, the switch matrix is strobed every 2 ms (500 times/sec), so with 8 columns, each input is read every 16 ms. What, you say? The processor only reads the inputs once every 16 ms, or 62.5 times per second? Sorry, that is it. Of course, the OPP Gen1 boards sampled the inputs much faster, did debouncing in software, and could detect and report edges, so even if an input wasn’t polled fast enough, the board could “hold” that an input had occurred until the next read command came in.

So when Joe proposed the Gen2 boards, things changed slightly. Since the design moved to a Cortex-M0 based processor, the boards could now support 32 inputs per board, or 16 solenoids + 16 inputs per board (basically a doubling of the density). The price per system dropped further because fewer boards needed to be purchased, wing cards could now be mixed and matched so exactly the right board could be placed at the right location, and the serial speed went up to 115,200 bps, a 3x increase. The great part is that this change in technology did not alter the physics of the game or how quickly inputs had to be read. Inputs were still occurring at the same rate, so this just gave more headroom. With Gen1 cards, I felt I was a little too close to the edge of not having enough serial bandwidth. Yeah, SharpeShooter 3 worked perfectly, but that isn’t the most full-featured game. With a 3x speed increase, I felt really good about the bandwidth on the serial port.

The Gen2 processor boards came with a USB to serial port converter chip on them.  That now eliminated the need to have an RS232 electrical level to 5V level converter.  Since some newer computers don’t have serial ports, it would have been necessary to buy a USB to serial converter and then convert the RS232 to 5V levels which seemed like too many adapters in my opinion.  (Most modern SOCs do have UARTs built into them.  This includes the RaspPi, BeagleBone, etc.)

So why am I talking about serial ports? It basically stems from the fact that the default MPF configuration tries to request information on the serial bus too quickly. (Maybe the default is to poll every 1 ms. I believe Jan has reduced the default polling rate for OPP, but I’m not 100% certain.) Old versions of MPF requested information so quickly that they saturated the serial link, and eventually data was lost on the link, which caused data byte errors. The original plan was that the link would never be fully utilized…but what if.

As with all errors, I wanted to see where the errors occurred if the link was completely saturated.  The serial communications goes from a host computer through a USB to serial converter, to a 115,200 bps serial link to the PSOC 4200, and then back again.  USB ports are significantly faster than the serial link, so I wanted to make sure that the USB was throttled properly.

For the first test, I set up a quick Python serial test to send a large amount of data to the USB virtual COM port and wrap it back without going to the PSoC 4200 processor. That tests just the USB to serial port hardware and driver. I threw down 260K of data and examined all of the data coming back to verify it was correct. The test ensures backpressure properly happens when going from the faster USB to the slower serial port. I also watched the output buffer and verified that it was backing up because of the data going into the slower pipe. That test worked 100%, which proves that the backpressure works and the buffering isn’t limited by a hardware buffer inside the USB to serial port chip.
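A sketch of that style of loopback test (hedged: the port name is machine specific, the helper names are mine, and it assumes pyserial is installed and TX is physically wrapped back to RX):

```python
def make_pattern(n):
    """Deterministic n-byte test pattern so a dropped byte is easy to locate."""
    return bytes(i & 0xFF for i in range(n))

def first_mismatch(sent, received):
    """Index of the first lost/corrupted byte, or -1 if everything matched."""
    for i, (a, b) in enumerate(zip(sent, received)):
        if a != b:
            return i
    return -1 if len(received) == len(sent) else len(received)

def loopback_test(port="/dev/ttyACM0", n=260_000):
    import serial  # pyserial; only needed when actually talking to hardware
    pattern = make_pattern(n)
    with serial.Serial(port, 115200, timeout=30) as ser:
        ser.write(pattern)       # host-side backpressure throttles this
        echoed = ser.read(n)
    return first_mismatch(pattern, echoed)
```

A return of -1 corresponds to the 100% pass described above; any other value points at the first byte the link lost.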

The next test that I ran sent all of the data to the PSoC 4200, but used a card address that doesn’t match the current card. That means all the data simply passes through the card without the card processing the information. I once again threw down 260K of data and examined the number of bytes received. I got a 0.74% loss rate. Examining the data, I also found that I would lose a single byte at a time, not drop large chunks and recover. That was good news because it supported my theory that the internal buffer in the PSoC 4200 was overflowing and losing data.

So why is saturating the link so bad? It means that as commands are sent to the card, they are buffered in internal memory for longer and longer before actually getting to the hardware. A quick analogy might help. Let’s say you have a sink whose faucet can fill faster than the drain can empty. When you start, everything is great because the extra water just collects in the sink, which is the buffer. But as it keeps filling, eventually it overflows and gets all over the floor. (That of course ignores the overflow drain at the top of the sink.) With a computer, it is even worse. The commands being put into the system are buffered in FIFO (First In, First Out) order. As the buffer fills up more and more, the commands take longer and longer to reach the hardware, and suddenly when you tell a solenoid to kick, it takes half a second or a whole second before that solenoid finally gets the command. The longer the system runs, the further behind it falls, and the less responsive it becomes. That is very bad.
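A toy model of that sink analogy, with made-up rates just to show the backlog (and therefore the command latency) growing without bound:

```python
# Bytes arrive from the host faster than the serial link can drain them.
fill_rate = 120    # bytes/sec written by the host (hypothetical)
drain_rate = 100   # bytes/sec the link can actually move (hypothetical)

for elapsed_s in (1, 5, 30):
    backlog = (fill_rate - drain_rate) * elapsed_s   # bytes stuck in the FIFO
    latency = backlog / drain_rate                   # how stale a new command is
    print(f"after {elapsed_s:>2}s: backlog {backlog} bytes, latency {latency:.1f}s")
```

After 30 seconds at these rates, the newest command sits behind a 600-byte backlog, i.e. 6 seconds of latency, which is exactly the “solenoid kicks a half second late” failure mode.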

When first discussing this problem, we batted around the idea of adding Xon/Xoff flow control. There are two problems with that. The first problem is that it would not stop the buffer from continuously increasing in size and making the system less responsive. (It would actually exacerbate that problem.) The second problem is that OPP commands are binary, and as such Xon/Xoff must be “escaped” to ensure that a data byte that happens to equal the Xon character can be distinguished from an actual Xon. That means the amount of data on the wire increases to carry the escape sequences. (I believe that FAST and P-ROC both use text-based commands. Xon/Xoff are non-printable characters, so those protocols don’t need to escape anything. The downside is that the protocol is much less efficient on the wire; when they send a byte of data, they send two ASCII characters to represent it. To send a byte of all ones, they send two ‘F’ characters, or 0x46 0x46 in binary, while OPP simply sends 0xff and is done with it. That means the OPP protocol can be twice as efficient on the wire. The trade-off is that it is much less human-readable; I can’t just look at a string of characters going past and easily tell what is going on, because many of them are unprintable.)
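The wire-efficiency claim is easy to demonstrate with a generic ASCII-hex encoding (this is the general idea, not the exact FAST or P-ROC command format):

```python
payload = bytes([0xFF, 0x00, 0x5E])        # some arbitrary binary command bytes

ascii_hex = payload.hex().upper().encode() # text protocol: 2 characters per byte
binary = payload                           # OPP style: the raw bytes themselves

print(ascii_hex)                 # b'FF005E' -- 6 bytes on the wire
print(len(binary))               # 3 bytes on the wire
assert len(ascii_hex) == 2 * len(binary)
```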

While doing this testing, I learned a couple more things about the buffering of the Cypress USB to serial port. In Python you can ask the serial port how many bytes are waiting to be transmitted (the out_waiting attribute). Even when that value is 0 (i.e. the host buffer is empty), there can still be up to 128 bytes of data in an internal buffer in the USB to serial port chip. That is very problematic because you can’t assume the line is idle just by polling out_waiting. (You need to wait for out_waiting to reach 0, and then wait another 10 ms for the USB to serial port chip’s buffer to clear.) I also found that the serial link could handle bursts of 2,160 characters without losing any. That shows how long it takes for the backlog on the serial link to build up and eventually overflow the PSoC 4200 SCB buffers.
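Here is a sketch of what waiting for a truly idle line has to look like, given that hidden chip buffer (assuming pyserial’s out_waiting attribute; the 10 ms pad comes from the measurement above):

```python
import time

def wait_until_line_idle(ser, chip_drain_s=0.010, poll_s=0.001):
    """Block until the host-side buffer is empty, then pad for the
    USB-to-serial chip's internal (~128 byte) buffer, which out_waiting
    cannot see."""
    while ser.out_waiting > 0:   # bytes still queued on the host side
        time.sleep(poll_s)
    time.sleep(chip_drain_s)     # out_waiting == 0 does NOT mean the wire is idle
```

Because it only touches out_waiting, the function is trivially testable with a fake serial object.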

So long story short, what is the best fix for this?  I feel adding Xon/Xoff doesn’t correct the problem, and could lead to an even less responsive system.  (And it isn’t just because I’m too lazy to add Xon/Xoff and escaping to the protocol.  It just isn’t the correct fix).  I really believe that tuning the polling rate is the correct thing to do here.  I’ll look through the OPP MPF platform and see if there is an easy way to auto-tune the rate so others don’t need all this knowledge to make the right call on how fast to poll the hardware.  I gotta get in there anyway to add OPP RGB LED support.

11/14/2017 – Slowly continuing

Lots of small projects happened over the last few months. As stated previously, I’m taking a sabbatical from large pinball projects. I added some functionality to the MPF interface, which Jan merged into the MPF project. I can’t even remember what that functionality was at this point. (Maybe support for switch matrices? Yeah, I’m going to go with that.)

In the last month, I’ve been writing a Qt application to optimize picking groups during pinball league nights. I wrote up a whole article on why the method is better than what is currently out there, including academic papers on the math behind it. I then went through the math to figure out the margin of error in determining the best player. Yeah, I’m nearly certain only Bowen would have been able to follow it, or all those statistics heads that read the blog (well, I doubt there are really any statistics majors reading this blog). I used the New England Pinball League (NEPL) attendance at Pinball Wizard Arcade (PWA) to see if the new method really made a difference. After doing the math, it became evident that not enough games were being played during the season to guarantee that the best players were placed in A division. The method did improve the quality of the players placed in division A, but with the limited number of meets, it couldn’t guarantee that the absolute best players were placed there.

An easy way to understand this is to think about the worst case. Let’s say you have a league with 80 players and they attend every meet. (That makes it easy because it is 20 groups of 4 players each.) Let’s say that 40 of the players are the Bowens of the world (excellent players), and 40 are the Hughs of the world (sucky players). Even though it is unlikely, when randomly assigning groups, every Bowen could always be placed with other Bowens. When the groups are then split into A, B, C and D divisions, every division would be made up of half Bowens and half Hughs.

So the algorithm I worked on reduces the number of times that a single player replays another player within a season. It makes the above situation less likely, but it does not eliminate the possibility. The only thing that can really eliminate the possibility is to hold enough meets that everybody is guaranteed to play everybody else; with 79 opponents and at most 3 new opponents per meet, that takes at least 27 meets. As per the current NEPL rules, only the first 5 weeks determine which division you will be placed in. So at most, you will only play against 15 different opponents. (With randomly assigned groups, it can be even fewer.)
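Worked out exactly (same 80-player, groups-of-4 example as above):

```python
import math

players = 80
group_size = 4
qualifying_meets = 5            # per the NEPL rules cited above

opponents_per_meet = group_size - 1                              # 3 per group
qualifying_opponents = qualifying_meets * opponents_per_meet     # best case
meets_for_full_coverage = math.ceil((players - 1) / opponents_per_meet)

print(qualifying_opponents, meets_for_full_coverage)   # 15 27
```

Five qualifying weeks expose each player to at most 15 of their 79 possible opponents, while guaranteeing everyone plays everyone would take 27 meets, far more than any season has.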

Just to state that more succinctly: if there are more than 16 people playing at a location, even with the best group-picking algorithm in the world (one not relying on past wins/losses), the breakdown of the divisions cannot reliably put the correct people into the correct divisions.

The algorithm that I created reduces the effects of the randomization, but it can’t overcome them because of the limited number of meets.  So basically, the algorithm is better, but there is no way that it can be perfect.
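The repeat-minimizing idea can be sketched as a greedy loop: shuffle the players, then fill each group by repeatedly grabbing the candidate who has played the current group members the least.  This is an illustrative sketch only (all names are mine), not the actual Qt implementation in the repository:

```python
import itertools
import random

def pick_groups(players, past_pairs, group_size=4):
    """Greedily assign players to groups, minimizing repeat pairings.

    players: list of player names (length divisible by group_size)
    past_pairs: dict mapping frozenset({a, b}) -> times a and b have
                already been grouped together this season
    """
    remaining = players[:]
    random.shuffle(remaining)  # break ties randomly
    groups = []
    while remaining:
        group = [remaining.pop()]
        while len(group) < group_size:
            # pick the candidate who has played the current group members least
            best = min(
                remaining,
                key=lambda p: sum(past_pairs.get(frozenset((p, g)), 0)
                                  for g in group),
            )
            remaining.remove(best)
            group.append(best)
        # record the new pairings for future meets
        for a, b in itertools.combinations(group, 2):
            key = frozenset((a, b))
            past_pairs[key] = past_pairs.get(key, 0) + 1
        groups.append(group)
    return groups
```

Being greedy, this only reduces repeats rather than guaranteeing a minimum, which matches the point above: no picking scheme can make up for too few meets.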

I guess it is like baseball to me.  I hate the whole idea that an umpire can call pitches, and “open up” or “restrict” the strike zone.  It drives me insane.  A computer can do a much better job, and there wouldn’t be the fear of umpire bias, or bad calls.  My wife likes the “human” aspect of the umpire calling strikes and balls, but I say, bah, humbug.

Anyway, the Qt project is now up in the repository.  With all of the above issues, I don’t know whether it is really valuable, but it was fun to write.

9/29/2017 – Open source is open source

This has come up again, and when people start yelling about others, it gets me a little angry.  Think of this blog post as another one of those where I’m just complaining.  It is only tangentially related to pinball, but it irks me so here we go.

I’m a big proponent of open source.  Why?  Because open source tools have enabled me to create a bunch of great things.  Things that would not have been possible without open source.  Let’s go over the tools that I use most days in my life: gcc, Linux, Python, Eclipse, and millions of others.  That list doesn’t include many subprojects like KiCad, GIMP, Audacity, etc.  It also fails to mention open standards such as C, C++, IP, and UDP, which make our world go around.  I guess that I’m not just talking about open source, but transparency and open standards.

Everything that I have produced for the Open Pinball Project (OPP) is under the GPLv3 license.  Why is that license important to me, and why not simply release the stuff into the public domain?  A public-domain work can be used by everyone (that’s good).  Here is the important distinction: if you base your project on a GPLv3 project, your project must also be released under the GPLv3.  This is referred to as copyleft.  Anything that is derived from the original GPLv3 copyrighted material must also be made available for others to derive further works from.

So anyone can take anything in the OPP repository and use it for their own purposes.  The only stipulation that I really care about is that if they do that and make improvements, those improvements must also be available publicly so others can easily get the improvements and new works.  That ends up helping to increase the quality of the software or hardware in the future.

So why did I release it with that particular license?  OPP is months of hard work.  Maybe even years.  I have no idea how much time I have spent on it over the last 10 or 15 years.  I want others to be able to leverage my work so that they can learn from it, extend it, and generally use anything that they find useful.

I take copyrights very seriously.  Here is a strange fact that most people don’t know.  If you use the OPP pinball framework, sound files can be placed either in the sounds folder or the sounds/copyright subfolder.  If a file is in the copyright subfolder, it should not be saved to the repository and should simply exist on the physical machine.  For the Van Halen pinball machine, there are many Van Halen/Hagar songs that are played in the background.  That’s all copyrighted material, so I don’t have the rights to put it in the repository.  If the framework can’t find a sound file that is supposed to be in the copyrighted folder, it plays a standard sound clip stating “that is copyrighted”.
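The fallback behavior described above is simple to picture in code.  This is a hedged sketch of the idea, not the actual OPP framework source; the folder names follow the post, but the function and clip names are mine:

```python
import os

# Illustrative names; the real framework's layout and fallback clip
# differ in detail.
SOUNDS_DIR = "sounds"
COPYRIGHT_DIR = os.path.join(SOUNDS_DIR, "copyright")
FALLBACK_CLIP = os.path.join(SOUNDS_DIR, "that_is_copyrighted.wav")

def resolve_sound(name, copyrighted=False):
    """Return the path of the clip to play.

    Copyrighted clips live only on the physical machine (never in the
    repository); if one is missing, fall back to a stock clip that
    says "that is copyrighted".
    """
    folder = COPYRIGHT_DIR if copyrighted else SOUNDS_DIR
    path = os.path.join(folder, name)
    if copyrighted and not os.path.isfile(path):
        return FALLBACK_CLIP
    return path
```

The nice property is that the repository stays legally clean while a fully provisioned machine plays the real songs.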

At one point a few years back, people were complaining that others were using their work in ways that it wasn’t meant to be used.  I believe MPF (Mission Pinball Framework) was originally branched from PROC work that Gerry, Mike, and some others did.  I think this was in regards to MPF supporting multiple pieces of hardware.  Don’t complain about that.  Feel honored that your hard work is getting new life and others are benefiting from using it.  That is the point of open source.  It allows others to use your sweat and tears to make something better than what you ever imagined.

People aren’t “stealing” your work…they are extending it.  Imitation is the highest form of flattery.  If you don’t want others to use your work, don’t release it as open source software/hardware/whatever.  Keep it closed and locked away, and I really do feel that your stuff will suffer when an open source solution comes along.  The reason that so many people use the PROC platform is because it is relatively open, and it has a well defined interface.  It has a skeleton framework that others can use to create their own projects.

So why did I get on this rant?  Strangely, it is because of Ben Heck.  Ben Heck and Mike from HomePin are having a little tiff.  I believe the PinHeck system was released under the Creative Commons – Share Alike (CC-SA) license.  (That holds many of the same copyleft attributes as GPLv3.)  It seems that Mike, or people working for Mike, may have based some of his designs on the PinHeck system.

So a couple quick points.  Under that license you must attribute the original design in some way.  Yada, yada, this is based on such and such.  Regardless of the license, that is the right thing to do.  If that is what happened, just fess up, Mike, and say it.  There is nothing illegal about copying that design lock, stock, and barrel and using it for your own purposes, because it was released under CC-SA.  Any fixes that he made to the design must be published, because it is a copyleft license.

So, I assume when Ben caught wind of this, he pulled the files from the server.  I haven’t checked lately whether they are still there.  The CC-SA license is irrevocable.  You can’t simply say, sorry, I now don’t want it to be open to the public.  Once open source, it is always open source.  The legal ramifications would be impossible to reconcile, since others by design could have based their works on your work.

Pulling things off the server is absurd.  Once on the internet, it is always on the internet.  There is a project called the Internet Archive that lets you go back in time and find things that people have posted on the internet and then removed.  Even though Ben has removed the content, it is still out there.  I have all the design/art files that Ben posted for America’s Most Haunted (AMH).  They were put out there so others could extend his work and make mods and toppers.  They could even build another AMH if they wanted to spend the time.  The files are still readily available on the Internet Archive.

So that gets me to the last point.  Ben mentioned Charlie caught wind of someone trying to build another AMH.  Maybe it was me, maybe it was somebody else.  The truth is I actually considered it.  Charlie, Ben, you can rest assured that I have no desire to make an AMH from scratch.  That being said, it would be a technically interesting project, but I don’t feel it is worth the time or effort.

I’m still hoping for somebody to create a completely open source pinball machine from scratch.  Maybe in the next 10 years.

If anybody has more correct details on the origins of MPF, why Ben pulled the art files, etc., I will be happy to fix the above post or post corrections.  Most of this is third- or fourth-hand information that I’ve gleaned from the internet, so it is not very reliable.

9/21/2017 – Updated NeoPixel Library

This is the first step in a multi-step process to update how NeoPixels are dealt with in the OPP framework.  This is going to be pretty techy-centric, so I apologize in advance.  (Truth be told, I really have no idea who reads this blog, so maybe those two people like techy topics.  I just don’t know.)

The original implementation of lighting NeoPixels using the PSoC used an SPI bus to create the protocol.  A single bit sent to a NeoPixel is a waveform which is high for 417 ns, high or low for the next 417 ns depending on whether the bit is a 1 or a 0, and low for the last 417 ns.  If you add these 3 portions of the waveform together, a single bit takes 1.25 us to send.  The SPI bus is one of the simplest buses.  To send a 1, it sends a high for a clock cycle.  To send a 0, it sends a low for a clock cycle.  If I set the clock of the SPI so a full clock cycle is 417 ns, I can have the hardware send a bit stream at the correct rate.  There is one problem.  Instead of sending a single bit, I have to send the framing portion of the protocol, so I end up sending 1x0 for each data bit x.  (If I want to send a 1, I send 110.  If I want to send a 0, I send 100.  Simple.)  Here’s the problem: making 3 bits out of 1 bit kind of stinks because the expanded bit stream keeps crossing byte boundaries and such.  Really quite annoying.
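The 3x expansion can be shown concretely.  Here is a small Python sketch of the encoding (the firmware itself is C on the PSoC; this is just the bit math): every data bit becomes the frame 1b0, so 3 input bytes become 9 SPI bytes, and individual frames straddle byte boundaries.

```python
def neopixel_to_spi(data):
    """Expand NeoPixel data bytes into the SPI bit stream.

    Each NeoPixel bit b becomes the 3-bit frame 1b0 (1 -> 110, 0 -> 100)
    clocked at 417 ns/bit, so one frame takes 1.25 us.  24 data bits
    become 72 SPI bits: 9 output bytes per 3 input bytes, which is why
    the frames keep crossing byte boundaries.
    """
    bits = []
    for byte in data:
        for i in range(7, -1, -1):          # MSB first
            b = (byte >> i) & 1
            bits.extend((1, b, 0))          # frame: high, data, low
    # pack the 3x-expanded bit stream back into bytes
    out = bytearray()
    for i in range(0, len(bits), 8):
        chunk = bits[i:i + 8]
        value = 0
        for bit in chunk:
            value = (value << 1) | bit
        value <<= 8 - len(chunk)            # pad a final partial byte
        out.append(value)
    return bytes(out)
```

For example, an all-ones byte expands to 110 repeated eight times (0xDB 0x6D 0x6D... the pattern shifts within each output byte), which is exactly the byte-boundary annoyance described above.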

One way to reduce this annoyance is to just store everything as 3 bits in RAM.  That means that each NeoPixel requires 9 bytes (24 bits/NeoPixel * 3 = 72 bits, or 9 bytes).  As mentioned in the last post, the PSoC 4200 only has 4K of RAM, so that disappears pretty fast.

Another way is to generate the framing on the fly.  Every time a bit is sent to a NeoPixel, it is prepended with a 1 and appended with a 0.  This takes much more processing time, but saves RAM.  (This was the original implementation, and the code that contains all the bit shifts and boundary checking is magnificent.  Completely unreadable.)

Neither of these solutions is optimal.  (Heck, they both really kind of stink.)  Enter the little PLD that is part of the PSoC 4200.  Why not create a little state machine to pull bytes from a FIFO and create the framing in hardware?  That way, the processor doesn’t have to do any of the bit-banging stuff.  As an added bonus, the PSoC has internal FIFOs that can be set for either 24 bits (RGB NeoPixels) or 32 bits (RGBW NeoPixels).  In that way, the processor just has to keep the FIFO from under-flowing and can use a single write to send a complete NeoPixel update value.  Very clean.

At this point, I’ve created a library that does all the NeoPixel heavy lifting, with the state machine included and an interrupt to keep the FIFO from under-flowing.  In the OPP code, I will simply pre-allocate 3 bytes for every NeoPixel in memory; then as MPF or whatever framework wants to update the value of a NeoPixel, it just has to send the index and the new value.  At this point I’ve just finished the library, and with the previous NeoPixel incarnation, I had already set up commands to change NeoPixel values.  I might update those commands to be less restrictive.

Using this library, the load on the processor should be as small as possible on the PSoC 4.  The only thing that would be better is to buy a PSoC 5 and set up a DMA to fill the FIFO.  That would mean the updates could happen without any processor intervention at all, except for starting the DMA.

Here is a link to the Cypress question, and the final library solution that I posted.  Since I’m going to integrate it into the OPP firmware, there is probably little reason to click this link, but it does have the library.  Cypress Developer Community Link