1/22/2018 – Time for the reboot

So the OPP cards are all done, and not much else has been happening lately.  Yeah, maybe I could lay out another couple of cards to make one or two people happy (and that might happen at some point), but that isn’t really interesting to me.  (Edit: strangely, since I wrote that sentence, I ended up doing exactly that, and the new “plank” boards are in the repository.  I have no plans to fabricate the PCBs at the moment unless I need them.  Basically there are two types: 8 solenoids and an interface in a single plank (1019), or 16 incandescents and an interface in a single plank (1020).)  Where do we go from here?

The original goal of the Open Pinball Project was to create a complete pinball machine from scratch.  The tag line at the top of the site is “Open source pinball hardware and software – one group’s quest to build the perfect pinball machine, or at least a pinball machine for a reasonable price!”  I think that OPP is going to go back to its roots.

The goal was to build ten pinball machines.  I know that is never going to happen because I’m just not interested in doing that and only a fool would give me money.  (Well, except those Kickstarter people…thanks for that.)  A second goal that I always had in the back of my mind was to create a completely open source pinball machine.  That is where we are going to go for the next couple of years.

What is an open source pinball machine?  The idea is that there is going to be enough information on this website and in the repository that a person can build a copy of the pinball machine from scratch.  It will contain a whole bunch of things such as:

  • Bill of Materials (BOM), so all parts can be purchased from reputable suppliers.  This is in stark contrast to all my previous projects that simply reused parts salvaged from random pinball machines.  Parts like flipper assemblies, pop bumper assemblies, side rails, lockdown bar, legs, etc.
  • GCode for using a CNC router to cut the playfield
  • Instructions for making the cabinet and backbox from scratch (may also be cut using a CNC router if that makes sense)
  • Art files for playfield and cabinet graphics
  • Wiring diagrams
  • Sub-assembly diagrams for interactive toys
  • Source code to run pinball rules (based on MPF)
  • STL files for all models for 3D printing or whatever file format makes most sense
  • Templates for creating ball guides
  • Hopefully step by step instructions from start of project to end of project
  • A lot more stuff should be on this list…

Here is a quick rundown of the pinball machine:

  • Disaster! themed – destruction from tornadoes, earthquakes, sinkholes, volcanoes, and maybe a giant animal if lucky
  • Wide body machine – If only making a single pinball machine, throw everything into it so there doesn’t need to be another one.  Wide body = more space.
  • LCD for displaying animations and slides
  • Original
    • playfield layout
    • artwork
    • toys
  • Standard pinball parts from an established supplier
  • Pinside thread will eventually be created to get more collaborative input
  • Single board computer for running rules
  • Future pinball simulation of playfield
  • Planning NeoPixel full RGB lighting

I fully realize I can’t design a pinball machine from scratch.  (Well, not one that is going to be fun to play.)  I’m hoping for the machine to be a much more collaborative project that takes input from many different sources.  Because of that, it is going to take a long time.  (Since it is only a hobby, I work on it when I have time.  Right now my basement is running at about 48 degrees, which doesn’t make me want to spend much time down there.  In the summer, when my basement is 65 degrees and the upstairs is in the high 70s, I seem to do a lot more work in the basement.)

This year, the focus will be on the playfield.  The playfield design is going to be as original as possible.  (Basically, I’m not going to steal a previous playfield design.  That would be the easy way out.  Because of that, it is going to take a large amount of time and many revisions.)  To that point, Joe and I have started to bat some ideas back and forth.  Currently we are throwing a playfield design back and forth using Future Pinball.  The physics in Future Pinball aren’t great, but we should be able to get the basic shots and geometries down.  I expect that we can throw general ideas at Future Pinball, but building a white wood is not just a good idea, but necessary.

To add more collaboration, I’m trying to sign up people in the MA/NH areas for play testing of the white wood and beyond.  That adds one more avenue for creative ideas to be discussed and proposed.  A couple of people have said they are interested, including some players that are much better than I am.  The other nice thing is that it forces the design to move forward and makes it less likely to get stuck at some point.  Anyone who is interested, send me a note (email is on the about page at the bottom).  When I get things far enough along, I will set up bi-weekly meetings to gather people.  I’m hoping to make the prototype portable enough that I can take it to various locations to get as much feedback as possible.

Goals for the pinball machine:

  • Hopefully it will be fun
  • Rules will be run by MPF.  I state this with every new pinball machine and then back out because I don’t have enough time.  Without MPF, people won’t be able to extend the rules.  I feel it is a requirement.
  • It would be great if another person would build one to verify the documentation, but since I can’t control that, oh well.
  • Boot time must be minimized.  Van Halen takes 1 or 2 minutes to boot.  Taxi takes maybe 10 or 15s.  That means that I never turn on Van Halen or SS3 if I want to play a quick game.  I realize this is interrelated with MPF, but this is very important for how I play pinball at my house.

Okay, this post has gone on for long enough.  Time to get back to it and publish this before more things happen and I have to revise it yet again.

Well, if nothing else, this pinball machine is going to be a “Disaster!”


12/23/2017 – Year end summary

It has been a busy year at the Open Pinball Project (OPP).  I think the peak of the year was bringing two different machines to Pintastic, both using OPP hardware.  Van Halen ran without flaws (except for the fact that nobody could hear the sounds because the free play room is ear-splittingly loud).  SharpeShooter3 came out of its cocoon, was updated to Gen2 boards, and ran well, but not perfectly.  (Maybe I should have run a couple of tests instead of just powering it on, flipping it once, and saying good to go.)  My bad, but I was so busy with the Van Halen machine that I didn’t have the time.  I don’t believe that I will bring two machines to Pintastic again.  They barely fit in the car, and just the setup and teardown of two machines for two short days was a lot of hassle.  (It didn’t help that I had to drive back home, a 3-hour round trip, because I forgot something.  That was probably the worst part.)

The Mission Pinball Framework (MPF) OPP platform got some major updates thanks mostly to Jan.  MPF should be running better than ever using his latest changes.  It addresses the fact that previously the config files had to be tuned by hand to figure out the polling frequency, and now that should occur auto-magically, just like the Spike stuff.

The next two years may be big years for OPP.  I’m hoping to announce another project at the beginning of the year if I can get everything lined up that I need to get lined up.  I have to talk to a lot of different people, and hopefully find a few others to come on board to help.  Joe has already said that he is in, and that is a big part of making the next project possible.  (I need his mechanical genius/3D printer/etc.)  I need a couple of local people to really make it a reality, and I just haven’t run into the right people in the area.  I’m targeting one guy who is a really good player in hopes that he can provide some necessary feedback.

Wishing everybody a Merry Christmas and a good end of the year.  Keep coming up with good pinball ideas, and making them happen.


11/29/2017 – Let’s talk about Serial Ports

Serial ports still make the world go ’round whether you believe it or not.  Serial ports were created on the second or third day depending on the translation of Genesis that you are reading.  Serial ports can be fun little wires that connect one micro-controller to another micro-controller, and let’s talk a little bit about them.

Back in the good old days, every single computer in the world had one or even two RS232 ports.  Why?  So you could connect to that modem.  So you could connect two PCs together to transfer files (since it was faster than floppy disks).  You get bonus points if you have ever had to transfer data between computers using a null modem connector (pin 2 – pin 3 crossover) and just serial ports because nobody had network cards in those days.  Oh, those were the days!

Why are we talking about serial ports, you may ask?  Well, because even before there was OPP hardware, there were calculations on serial bandwidth and whether it would be enough.  There is no reason to start a project unless you first answer a couple of feasibility sanity questions and make sure what you are trying to do makes some semblance of sense.  (This is why the propane-powered snow melter for driveways never made it beyond the conceptualizing stage.  The thermodynamics for that project didn’t pass muster.)

The original Gen1 cards (using an HCS08 processor) ran serial communications at 38,400 bps.  They were cute little cards that could support 16 inputs, or 8 solenoids with 8 inputs.  There was no mix and match of capabilities.  You could not make a 4 solenoid Gen1 card because that card would have needed a different layout that I never created.  At 38,400 bps, the cards could send and receive 3,840 bytes/sec.  That doesn’t sound like much, but assuming polling every 10 ms, there were 38.4 bytes available to send/receive data.  To receive data from an input card, the message would be 5 bytes in length.  To receive data from a solenoid card (i.e. read the switch states), the message would be 4 bytes in length.  That means that there is enough time to service a 4 input card + 4 solenoid card system.  That means there are 96 total inputs ((4 * 16) + (4 * 8)) for the system, which seemed more than adequate to me.  I didn’t really imagine putting 32 solenoids in a system and thought that could easily be dropped to 24 solenoids or lower, which gave me more breathing room on the serial port.  (Only 32 bytes per 10 ms.)
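The budget math above can be sanity-checked with a few lines of Python.  (The figures are the ones from this post: 38,400 bps, 10 ms polling, 5-byte input-card reads, and 4-byte solenoid-card reads.)

```python
# Gen1 serial budget, using the numbers quoted above.
BPS = 38_400
BYTES_PER_SEC = BPS // 10      # ~10 bits per byte with start/stop framing
POLL_PERIOD_S = 0.010

budget = BYTES_PER_SEC * POLL_PERIOD_S   # bytes available per 10 ms poll
traffic = 4 * 5 + 4 * 4                  # 4 input cards + 4 solenoid cards
total_inputs = 4 * 16 + 4 * 8            # 96 switches total

# 36 bytes of traffic fits inside the ~38.4-byte budget, barely.
print(budget, traffic, total_inputs)
```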

So that design was assuming that I would poll the inputs every 10 ms.  10 ms seems pretty slow, right?  Wouldn’t I need to poll the inputs faster than that?  Well, how I originally figured it out was that I probed a pinball machine and looked at the rate of the column strobe for the switch matrix.  It is now a little simpler, and I’ll instead refer to online information.  According to the PinballRehab website, the switch matrix is strobed every 2 ms (500 times/sec), so if there are 8 columns, a read of each input is made every 16 ms.  What, you say?  The processor only reads the inputs once every 16 ms, or 62.5 times per second?  Sorry, that is it.  Of course, the OPP Gen1 boards sampled the inputs much faster, did debouncing in software, and could detect and report edges, so even if an input wasn’t polled fast enough, the board could “hold” that an input had occurred until the next read command came in.
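For reference, the strobe arithmetic works out like this (a trivial sketch, just restating the PinballRehab numbers):

```python
STROBE_PERIOD_MS = 2   # one column strobed every 2 ms
COLUMNS = 8

revisit_ms = STROBE_PERIOD_MS * COLUMNS   # each switch re-read every 16 ms
reads_per_sec = 1000 / revisit_ms         # 62.5 reads per second
print(revisit_ms, reads_per_sec)          # 16 62.5
```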

So when Joe proposed the Gen2 boards, things changed slightly.  Since the processor moved to a Cortex-M0 based processor, the boards could now support 32 inputs on a board, or 16 solenoids + 16 inputs per board.  (Basically a doubling of the density.)  The price per system dropped further because fewer boards needed to be purchased, wing cards could now be mixed and matched so the exact correct board could be located at the correct location, and the serial speed went up to 115,200 bps, a 3x increase in speed.  The great part is that this change in technologies did not change any of the physics of the game and how quickly inputs had to be read.  Inputs were still occurring at the same rate, so…this just gave more headroom.  With Gen1 cards and the serial port, I felt I was a little too close to the edge of not having enough bandwidth.  Yeah, SharpeShooter 3 worked perfectly, but that isn’t the most full featured game.  With a 3x speed increase, I felt really good about the bandwidth on the serial port.

The Gen2 processor boards came with a USB to serial port converter chip on them.  That now eliminated the need to have an RS232 electrical level to 5V level converter.  Since some newer computers don’t have serial ports, it would have been necessary to buy a USB to serial converter and then convert the RS232 to 5V levels which seemed like too many adapters in my opinion.  (Most modern SOCs do have UARTs built into them.  This includes the RaspPi, BeagleBone, etc.)

So why am I talking about serial ports?  It basically stems from the fact that the default MPF configuration tries to request information on the serial bus too quickly.  (Maybe the default is to poll every 1 ms.  I believe Jan has reduced the default polling rate for OPP, but I’m not 100% certain.)  Old versions of MPF requested information so quickly that they saturated the serial link, and eventually data was lost on the serial link, which caused data byte errors.  The original plan was that the link would never be fully utilized…but what if.

As with all errors, I wanted to see where the errors occurred if the link was completely saturated.  The serial communications goes from a host computer through a USB to serial converter, to a 115,200 bps serial link to the PSOC 4200, and then back again.  USB ports are significantly faster than the serial link, so I wanted to make sure that the USB was throttled properly.

For the first test, I started by setting up a quick Python serial test to send a large amount of data to the USB virtual COM port and wrap it back without going to the PSoC 4200 processor.  That just tests the USB to serial port hardware and driver.  I threw down 260K of data and examined all of the data coming back to verify it was correct.  The test ensures backpressure is properly applied when going from the faster USB to the slower serial port.  I also watched the output buffer and verified that it was backing up because of the data going into the slower pipe.  That test worked 100%, which proves that the backpressure works and the buffering isn’t limited by a hardware buffer inside the USB to serial port chip.

The next test that I ran sent all of the data to the PSoC 4200, but used a card address that doesn’t match the current card.  That means that all the data simply passes through the card without the card processing the information.  I once again threw 260K of data at it and examined the number of bytes received.  I got a 0.74% loss rate on the data.  Examining the data, I also found that I would lose a single byte at a time, and was not dropping large chunks of data and recovering.  That was good news because it helped prove my theory that it was the internal buffer in the PSoC 4200 that was overflowing and losing data.
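The two tests above can be sketched roughly as follows.  This is a reconstruction, not the original script: the port name, baud rate handling, and helper names are my own, and it assumes pyserial plus hardware (or firmware) that echoes the traffic back.

```python
def make_pattern(n: int) -> bytes:
    """Deterministic test pattern so dropped bytes are easy to locate."""
    return bytes(i % 256 for i in range(n))

def first_mismatch(sent: bytes, received: bytes) -> int:
    """Index of the first dropped/corrupted byte, or -1 if all data survived."""
    for i, (a, b) in enumerate(zip(sent, received)):
        if a != b:
            return i
    return -1 if len(sent) == len(received) else len(received)

def run_test(port: str = "/dev/ttyACM0") -> float:
    """Saturate the link with 260K of data and measure the loss rate."""
    import serial  # pyserial; only needed when actually driving hardware
    data = make_pattern(260 * 1024)
    with serial.Serial(port, 115200, timeout=5) as ser:
        ser.write(data)
        echoed = ser.read(len(data))
    loss = 1.0 - len(echoed) / len(data)
    print(f"loss: {loss:.2%}, first mismatch at {first_mismatch(data, echoed)}")
    return loss
```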

So why is saturating the link so bad?  It means that as commands are sent to the card, they are buffered in internal memory for longer and longer before actually getting sent to the hardware.  A quick analogy might help.  Let’s say that you have a sink that the faucet can fill faster than it can drain.  When you start, everything is great because the extra water just gets added to the sink which is the buffer.  But as it keeps filling, eventually it overflows and gets all over the floor.  (That of course ignores the overflow drain at the top of the sink.)  With a computer, it is even worse.  The commands being put into the system are buffered in a FIFO fashion (First In First Out).  As the buffer fills up more and more, the commands take longer and longer to get to the hardware, and suddenly when you tell a solenoid to kick, it takes 1/2 second or a whole second before that solenoid finally gets that command.  As your system runs longer, commands take longer and longer to occur since the buffer keeps getting larger and larger.  That is very bad.  Your system becomes much less responsive.
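The sink analogy can be put in code.  This toy model (my own numbers, purely illustrative) shows the command latency growing every second that the offered load exceeds what the link can drain:

```python
def queue_latency_ms(in_rate: float, out_rate: float, seconds: int):
    """Backlog-induced delay (ms) at the end of each second.
    Rates are in bytes/sec; the backlog never drains once in_rate > out_rate."""
    backlog = 0.0
    delays = []
    for _ in range(seconds):
        backlog = max(0.0, backlog + in_rate - out_rate)
        delays.append(1000.0 * backlog / out_rate)
    return delays

# 12,000 bytes/s offered into an 11,520 bytes/s link: every second of
# runtime adds roughly another 42 ms before a new command reaches the coil.
print(queue_latency_ms(12_000, 11_520, 5))
```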

When first discussing this problem, we batted around the idea of adding Xon/Xoff flow control.  There are two problems with that.  The first problem is that it would not stop the buffer from continuously increasing in size and making the system less responsive.  (It would actually exacerbate that problem.)  The second problem is that OPP commands are binary, and as such the Xon/Xoff characters must be “escaped” to ensure that a data byte that happens to match the Xon character can be distinguished from an actual Xon.  This means the amount of data needs to increase to add these escape sequences.  (I believe that Fast and PROC both use text-based commands.  Xon/Xoff are non-printable characters, so they do not need to escape them.  The downside of doing that is that the protocol is much less efficient on the wire.  I.e. when they send a byte of data, they send two ASCII characters to represent the bits.  To send a byte of ones, they would send two ‘F’ characters, or in binary 0x46 0x46.  OPP simply sends 0xff and is done with it.  It means that the OPP protocol can be twice as efficient on the wire.  The downside is that it is much less human readable.  I can’t just look at a string of characters going past and easily tell what is going on because many of them are unprintable characters.)
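A two-line illustration of the wire-efficiency point (the exact Fast/PROC command formats aren't shown here; this just demonstrates how hex-ASCII doubles the byte count):

```python
payload = bytes([0xFF, 0x12, 0xAB])

ascii_hex = payload.hex().upper().encode()  # b'FF12AB' -> 6 bytes on the wire
binary = payload                            # 3 bytes on the wire

print(len(ascii_hex), len(binary))  # 6 3
```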

So while doing testing, I learned a couple more things about the buffering of the Cypress USB to serial port.  In Python you can ask the serial port how many bytes are waiting to be transmitted (the out_waiting parameter).  Even when this value is 0 (i.e. the buffer is empty), there can still be up to 128 bytes of data in an internal buffer in the USB to serial port chip.  That is very problematic because you can’t assume that the line is idle just by polling the out_waiting parameter.  (You need to wait for out_waiting to be 0, and then wait another 10 ms for the USB to serial port chip buffer to clear.)  I also found that the serial link could handle bursts of 2160 characters without losing any characters.  That shows the amount of time it takes for the backlog on the serial link to build up and eventually overflow the PSoC 4200 SCB buffers.
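In practice that means an idle check needs two stages.  Here is a minimal sketch (the function name and timeout handling are mine; the 10 ms settle time is the figure measured above):

```python
import time

def wait_for_idle(ser, settle_s: float = 0.010, timeout_s: float = 5.0) -> bool:
    """Block until the OS-side TX buffer drains, then pad with settle_s so
    the USB-serial chip's internal buffer (invisible to pyserial) empties too.
    Returns False if out_waiting never reaches 0 within timeout_s."""
    deadline = time.monotonic() + timeout_s
    while ser.out_waiting > 0:
        if time.monotonic() > deadline:
            return False
        time.sleep(0.001)
    time.sleep(settle_s)  # drain the chip-internal buffer we can't observe
    return True
```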

So long story short, what is the best fix for this?  I feel adding Xon/Xoff doesn’t correct the problem, and could lead to an even less responsive system.  (And it isn’t just because I’m too lazy to add Xon/Xoff and escaping to the protocol.  It just isn’t the correct fix).  I really believe that tuning the polling rate is the correct thing to do here.  I’ll look through the OPP MPF platform and see if there is an easy way to auto-tune the rate so others don’t need all this knowledge to make the right call on how fast to poll the hardware.  I gotta get in there anyway to add OPP RGB LED support.

11/14/2017 – Slowly continuing

Lots of small little projects happened over the last few months.  As stated previously, I’m taking a sabbatical from large pinball projects.  I added some functionality to the MPF interface, which Jan merged into the MPF project.  I can’t even remember what that functionality was at this point.  (Maybe support for switch matrices?  Yeah, I’m going to go with that.)

In the last month, I’ve been writing a Qt application to optimize picking groups during pinball league nights.  I wrote up a whole article on why the method is better than what is currently out there, including academic papers on the math behind it.  I then went through the math to figure out the margin of error in determining the best player.  Yeah, I’m nearly certain only Bowen would have been able to follow it, or all those statistics heads that read the blog (well, I doubt there are really any statistics majors reading this blog).  I was using the New England Pinball League (NEPL) attendance at Pinball Wizard Arcade (PWA) to see if the new method really made a difference.  After doing the math, it became evident that not enough games were being played during the season to guarantee that the best players were placed in A division.  The method did improve the quality of the players placed in division A, but with the limited number of meets, it couldn’t guarantee that the absolute best players were placed in division A.

An easy way to understand this would be to think about the worst case scenarios.  Let’s say you have a league with 80 players and they attend every meet.  (That makes it easy because it is 20 groups of 4 players each).  Let’s say that 40 of the players are the Bowens of the world (excellent players), and 40 of the players are the Hughs of the world (sucky players).  Even though it is unlikely, when randomly assigning groups, all the Bowens could always be placed with other Bowens.  When the groups are split into A, B, C and D divisions, all divisions would be made up of half Bowens and half Hughs.

So the algorithm I worked on reduces the number of times that a single player will replay another player within a season.  It makes the above situation less likely, but it does not eliminate the possibility.  The only thing that can really eliminate the possibility is to ensure that there are at least 20 meets in the season, to guarantee that everybody gets to play everybody else.  As per the current NEPL rules, only the first 5 weeks determine which division you will be placed in.  So at most, you will only play against 15 different opponents.  (With randomly assigned groups, it can be fewer opponents than this.)
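The arithmetic behind that 15-opponent ceiling, for anyone who wants to poke at it:

```python
GROUP_SIZE = 4                      # standard 4-player league groups
opponents_per_meet = GROUP_SIZE - 1 # you face 3 other players per night

def max_opponents(meets: int) -> int:
    """Upper bound on distinct opponents after the given number of meets."""
    return meets * opponents_per_meet

print(max_opponents(5))   # 15 -- the 5 qualifying weeks in the NEPL rules
```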

Just to state that more succinctly: if there are more than 16 people playing at a location, even with the best group-picking algorithm in the world (one not relying on past wins/losses), the breakdown of the divisions cannot quantitatively put the correct people into the correct divisions.

The algorithm that I created reduces the effects of the randomization, but it can’t overcome it because of the limited number of meets.  So basically, the algorithm is better, but there is no way that it can be perfect.

I guess it is like baseball to me.  I hate the whole idea that an umpire can call pitches, and “open up” or “restrict” the strike zone.  It drives me insane.  A computer can do a much better job, and there wouldn’t be the fear of umpire bias, or bad calls.  My wife likes the “human” aspect of the umpire calling strikes and balls, but I say, bah, humbug.

Anyway, the Qt project is now up in the repository.  With all of the above issues, I don’t know whether it is really valuable, but it was fun to write.

9/29/2017 – Open source is open source

This has come up again, and when people start yelling about others, it gets me a little angry.  Think of this blog post as another one of those where I’m just complaining.  It is only tangentially related to pinball, but it irks me so here we go.

I’m a big proponent of open source.  Why?  Because open source tools have enabled me to create a bunch of great things.  Things that would not have been possible without open source.  Let’s go over the tools that I use most days of my life: gcc, Linux, Python, Eclipse, and millions of others.  That list doesn’t include many subprojects like KiCad, GIMP, Audacity, etc.  It also fails to mention open standards such as C, C++, IP, and UDP, which make our world go around.  I guess that I’m not just talking about open source, but transparency and open standards.

Everything that I have produced for the Open Pinball Project (OPP) is under the GPLv3 license.  Why is that license important to me, and why not simply release the stuff into the public domain?  A public domain work can be used by everyone (that’s good).  Here is the important distinction.  If you base your project on a GPLv3 project, your project must also be released under the GPLv3.  This is referred to as copyleft.  Anything that is derived from the original GPLv3 copyrighted material must also be available for others to derive further works from.

So anyone can take anything in the OPP repository and use it for their own purposes.  The only stipulation that I really care about is that if they do that, and make improvements, those improvements must also be available publicly so others can easily get the improvements and new works.  That ends up helping to increase the quality of the software or hardware in the future.

So why did I release it with that particular license?  OPP is months of hard work.  Maybe even years.  I have no idea how much time I have spent on it over the last 10 or 15 years.  I want others to be able to leverage my work so that they can learn from it, extend it, and generally use anything that they find useful.

I take copyrights very seriously.  Here is a strange fact that most people don’t know.  If you use the OPP pinball framework, a sound can be placed either in the sounds folder or in the sounds/copyright subfolder.  If it is in the copyright subfolder, it should not be saved to the repository and should simply exist on the physical machine.  For the Van Halen pinball machine, there are many Van Halen/Hagar songs that are played in the background.  That’s all copyrighted material, so I don’t have the rights to put it in the repository.  If the framework can’t find a sound file that is supposed to be in the copyrighted folder, it plays a standard sound clip stating “that is copyrighted”.

At one point a few years back, people were complaining that others were using their work in ways it wasn’t meant to be used.  I believe MPF (Mission Pinball Framework) was originally branched from PROC work that Gerry, Mike, and some others did.  I think this was in regards to MPF supporting multiple pieces of hardware.  Don’t complain about that.  Feel honored that your hard work is getting new life and others are benefiting from using it.  That is the point of open source.  It allows others to use your sweat and tears to make something better than what you ever imagined.

People aren’t “stealing” your work…they are extending it.  Imitation is the highest form of flattery.  If you don’t want others to use your work, don’t release it as open source software/hardware/whatever.  Keep it closed and locked away, and I really do feel that your stuff will suffer when an open source solution comes along.  The reason that so many people use the PROC platform is because it is relatively open, and it has a well defined interface.  It has a skeleton framework that others can use to create their own projects.

So why did I get on this rant?  Strangely, it is because of Ben Heck.  Ben Heck and Mike from HomePin are having a little tiff.  I believe the PinHeck system was released under the Creative Commons – Share Alike (CC-SA) license.  (That holds many of the same copyleft attributes as the GPLv3.)  It seems that Mike, or people working for Mike, may have based some of his designs on the PinHeck system.

So a couple quick points.  Under that license you must attribute the original design in some way.  Yada, yada, this is based on such and such.  Regardless of the license, that is the right thing to do.  If that is what happened, just fess up, Mike, and say it.  There is nothing illegal about lock-stock-and-barrel copying that design and using it for your own purpose, because it was released under CC-SA.  Any fixes that he made to the design, he must publish, because it is a copyleft license.

So, I assume when Ben caught wind of this, he pulled the files from the server.  I haven’t looked lately if they are there.  The CC-SA license is irrevocable.  You can’t simply say, sorry, I now don’t want it to be open to the public.  Once open source, it is always open source.  The legal ramifications would be impossible to reconcile since others by design could have based their works on your work.

Pulling things off the server is absurd.  Once on the internet, it is always on the internet.  There is a project called the internet archive that lets you go back in time and find things that people have posted on the internet and then removed.  Even though Ben has removed the content, it is still out there.  I have all the design/art files that Ben posted for America’s Most Haunted (AMH).  They were put out there so others could extend his work and make mods and toppers.  They could even build another AMH if they wanted to spend the time.  They are still readily available on the internet archive.

So that gets me to the last point.  Ben mentioned Charlie caught wind of someone trying to build another AMH.  Maybe it was me, maybe it was somebody else.  The truth is I actually considered it.  Charlie, Ben, you can rest assured, that I have no desire to make an AMH from scratch.  That being said, it would be a technically interesting project, but I don’t feel it is worth the time or effort.

I’m still hoping for somebody to create a completely open source pinball machine from scratch.  Maybe in the next 10 years.

If anybody has more correct details on the origins of MPF, why Ben pulled the art files, etc., I will be happy to fix the above post or post corrections.  Most of this is third- or fourth-hand information that I’ve gleaned from the internet, so it is not very reliable.

9/21/2017 – Updated NeoPixel Library

This is the first step in a multi step process to update how NeoPixels are dealt with in the OPP framework.  This is going to be pretty techy centric, so I apologize in advance.  (Truth be told, I really have no idea who reads this blog, so maybe those two people like techy topics.  I just don’t know).

The original implementation of lighting NeoPixels using the PSoC used a SPI bus to create the protocol.  A single bit sent to a NeoPixel is a waveform which is high for 417 ns, low or high depending on whether the bit is 0 or 1 for the next 417 ns, and low for the last 417 ns.  If you add these 3 portions of the waveform together, a single bit takes 1.25 us to send.  The SPI bus is one of the simplest buses.  To send a 1, it sends a high for a clock cycle.  To send a 0, it sends a low for a clock cycle.  If I set the clock of the SPI so a full clock cycle is 417 ns, I can have the hardware send a bit stream at the correct rate.  There is one problem.  Instead of sending a single data bit, I have to send the framing portion of the protocol, so I end up sending 1x0.  (If I want to send a 1, I send 110.  If I want to send a 0, I send 100.  Simple.)  Making 3 bits out of 1 bit kind of stinks because it keeps crossing byte boundaries and such.  Really quite annoying.
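Here is a small sketch of that framing, in Python rather than the actual C firmware, to show why the 3-for-1 expansion is annoying: 8 data bits become 24 SPI bits, which never line up with byte boundaries.

```python
def frame_byte(value: int) -> str:
    """Expand one data byte (MSB first) into its 24-bit SPI pattern,
    wrapping each data bit b as the three SPI bits 1,b,0."""
    return "".join(f"1{(value >> i) & 1}0" for i in range(7, -1, -1))

framed = frame_byte(0b10000001)
print(framed)        # 110100100100100100100110
print(len(framed))   # 24 SPI bits -- 3 bytes out for every 1 data byte in
```

Stored pre-framed, a 24-bit RGB pixel costs 24 × 3 = 72 bits, which is where the 9 bytes per NeoPixel figure comes from.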

One way to reduce this annoyance is to just store everything as 3 bits in RAM.  That means that each Neopixel requires 9 bytes (24 bits/Neopixel * 3 = 72 bits or 9 bytes).  As mentioned in the last post, the PSoC 4200 only has 4K of RAM, so that disappears pretty fast.

Another way is to generate the framing on the fly.  So every time a bit is sent to a NeoPixel, it is prepended with a 1 and appended with a 0.  This takes much more processing time, but saves RAM.  (This was the original implementation, and the code that contains all the bit shifts and boundary checking is magnificent.  Completely unreadable.)

Neither of these solutions is optimal.  (Heck, they both really kind of stink.)  Enter the little PLD that is part of the PSoC 4200.  Why not create a little state machine to pull bytes from a FIFO and create the framing in hardware?  That way, the processor doesn’t have to do any of the bit banging stuff.  As an added bonus, the PSoC has internal FIFOs that can be set for either 24 bits (RGB NeoPixels) or 32 bits (RGBW NeoPixels).  In that way, the processor just has to keep the FIFO from under-flowing and can use a single write to send a complete NeoPixel update value.  Very clean.

At this point, I’ve created a library that does all the NeoPixel heavy lifting, with the state machine included and an interrupt to keep the FIFO from under-flowing.  In the OPP code, I will simply pre-allocate 3 bytes for every NeoPixel in memory; then, when MPF or whatever framework wants to update the value of a NeoPixel, it just has to send the index and the new value.  With the previous NeoPixel incarnation, I had already set up commands to change NeoPixel values, so I might update those commands to be less restrictive.
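The pre-allocated buffer and index-based update could look something like the sketch below.  The names (`neo_update`, `neo_fifo_word`, the chain length) are hypothetical, not the actual OPP library API; the point is just the data layout: 3 raw bytes per pixel, with the hardware state machine doing all the framing.

```c
#include <stdint.h>
#include <string.h>

#define NUM_PIXELS 64                  /* hypothetical chain length */

/* 3 bytes (GRB order) per RGB NeoPixel, pre-allocated up front */
static uint8_t pixel_buf[NUM_PIXELS * 3];

/* Hypothetical host command handler: update one pixel by index.
 * Returns 0 on success, -1 if the index is out of range. */
int neo_update(uint16_t index, const uint8_t grb[3])
{
    if (index >= NUM_PIXELS)
        return -1;
    memcpy(&pixel_buf[index * 3], grb, 3);
    return 0;
}

/* The FIFO-feeding interrupt just walks pixel_buf and writes one
 * 24-bit word per pixel; this returns the word it would push. */
uint32_t neo_fifo_word(uint16_t index)
{
    const uint8_t *p = &pixel_buf[index * 3];
    return ((uint32_t)p[0] << 16) | ((uint32_t)p[1] << 8) | p[2];
}
```

Since the framing lives in the PLD, the software side shrinks to a memcpy and a FIFO write, with no bit shifting anywhere.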

Using this library, the load on the processor should be as small as the PSoC 4 allows.  The only thing that would be better is to buy a PSoC 5 and set up a DMA to fill the FIFO.  That would mean the updates could happen without any processor intervention at all, except for starting the DMA.

Here is a link to the Cypress question and the final library solution that I posted.  Since I’m going to integrate it into the OPP firmware, there is probably little reason to click this link, but it does have the library.  Cypress Developer Community Link

8/30/2017 – Arduino revisited one more time

So things have been rather quiet at OPP central right now.  I got back from Pintastic, and I’ve basically done no real pinball-related things.  I kinda miss it, since I was spending so much time on it for a while.  Nobody is even posting on the BPA (Boston Pinball Association) except for the random machine for sale here or there.  Pinside has also been rather dull, but that is probably because I don’t know which threads are interesting.  There are some small email exchanges going on, but even those are mostly short and to the point.

So, pondering one evening, I thought to myself that I should create an Arduino shield, since so many people use that platform.  As you’ve all read before, I hate the Arduino framework, but I actually like the base processors a lot.  One guy mentioned that I could simply program everything in C and ignore all the “helpful” classes that hide the dirty nastiness of the processor.  (Of course, that dirty nastiness is what allows you to squeeze so much performance out of these little processors.)  My original thought was that, because it has more I/O, it would be easier for people to use it in a centralized way.

Anyhow, last weekend I started to do the basic research on the feasibility and value of creating such a shield.  So below I will do a little bit of a comparison between the PSoC 4200 that I currently use for the OPP wings and the Arduino Mega.

  1. Cost – PSoC $4, Mega $8.45 on eBay.  The Arduino is more than twice as much.  This is buying off eBay and trying to find the cheapest source.  If you went to AliExpress you could get it for about $7, but I have never bought off AliExpress, so fear of the unknown factors in a little bit.  You can probably assume about a 30-day wait if you buy from AliExpress.  PSoC is the winner here.
  2. I/O Pins – Mega 54, PSoC 36.  The Mega definitely has more I/Os.  That makes it very attractive, and it is probably why I started doing this exercise.  It was difficult for me to figure out from the data sheet how many are dedicated I/Os versus general-purpose I/Os.  I think that is mostly because the Arduino framework wants to configure certain pins to certain functions so people can create generic shields.  For a pinball controller, you mostly want raw inputs and outputs.  Some PWMs would be great also, if they could be routed to the correct pins.  The PSoC has 36 pins, of which 32 are generic I/Os, leaving 2 pins for communications (Tx/Rx) plus 2 more pins.  For me it is really the sweet spot of I/Os:  I can send a 4 byte (32 bit) value back to the host that gives the state of all the inputs.  At 54, I would probably choose to send a 6 byte (48 bit) value back to the host.  That would make the host code a little more sticky.  Regardless, more is always better, so Arduino is the winner.
  3. Voltage – PSoC 5V, Mega 5V.  I like 5V I/O more than 3.3V, and both processors support it.  At 5V you have more choices of MOSFETs, so that is all good.  Interfacing to a Pi is more difficult, but level shifters are pretty easy to work with in this day and age.  This one is a tie.
  4. Flash – Mega 256K, PSoC 32K.  The Mega has much more flash.  The latest version of the OPP firmware uses approximately 70% of the flash of the PSoC.  If I had more code space, I’m not sure what I would do with it.  I’m guessing the Mega needs more flash because of the Arduino framework, which, as mentioned before, is not a big selling point to me.  Arduino is clearly the winner.
  5. SRAM – Mega 8K, PSoC 4K.  Once again, the Mega has more resources, including SRAM.  (That’s where you store the variables in your code, plus the stack and heap.)  The original OPP project worked with a processor that had 8K of flash and 512 bytes of RAM.  (That included a bootloader that took 768 bytes of the flash.)  Again, Arduino is clearly the winner, but since I have enough resources, it doesn’t matter that much.
  6. Processor Frequency – PSoC 48 MHz, Mega 16 MHz.  OK, this is one parameter that I do care about.  With the addition of switch matrix support, the processor needs to do much more processing.  The PSoC has plenty of headroom, and I don’t worry about things like updating LEDs, sending responses to the host, reading the switch matrix, and firing solenoids all at the same time.  The running loop happens so quickly that all of these things can happen within that loop time.  At 16 MHz, I’m not so certain.  When trying to decide whether to make an Arduino shield, this was the nail in the coffin for me.  PSoC is clearly the winner.

There is also the Arduino Due.  It has the same number of I/Os, but changes the processor to 3.3V.  It has even more flash (512K) and RAM (96K), and it runs at 84 MHz.  It also costs about $12.50.  As a processor, that is interesting.  As a pinball controller, it just seems a little expensive to me.  (Going into this, I thought I was going to find an equivalent part to the PSoC at about $6 in the Arduino world, but I just haven’t found it.)

I’m glad I took the time to learn more about the Arduinos and do a little bit of research.  At this point, the idea is going back onto the shelf until the prices drop a little further.  I might ask for an Arduino Due for Christmas just to play with it, but that’s what it is going to end up being: one more toy in a box in the basement.