11/29/2017 – Let’s talk about Serial Ports

Serial ports still make the world go 'round whether you believe it or not.  Serial ports were created on the second or third day, depending on the translation of Genesis that you are reading.  Serial ports are the fun little wires that connect one micro-controller to another micro-controller, so let's talk a little bit about them.

Back in the good old days, every single computer in the world had one or even two RS232 ports.  Why?  So you could connect to that modem.  So you could connect two PCs together to transfer files (since it was faster than floppy disks).  You get bonus points if you have ever had to transfer data between computers using a null modem connector (pin 2 – pin 3 crossover) and just serial ports because nobody had network cards in those days.  Oh, those were the days!

Why are we talking about serial ports, you may ask?  Well, because even before there was OPP hardware, there were calculations on serial bandwidth and whether it would be enough.  There is no reason to start a project unless you first answer a couple of feasibility sanity questions and make sure what you are trying to do makes some semblance of sense.  (This is why the propane powered snow melter for driveways never made it beyond the conceptualizing stage.  The thermodynamics for that project didn't pass muster.)

The original Gen1 cards (using an HCS08 processor) ran serial communications at 38,400 bps.  They were cute little cards that could support 16 inputs, or 8 solenoids with 8 inputs.  There was no mix and match of capabilities.  You could not make a 4 solenoid Gen1 card because that card would have needed a different layout that I never created.  At 38,400 bps, the cards could send and receive 3,840 bytes/sec.  That doesn't sound like much, but assuming polling every 10 ms, there were 38.4 bytes available per poll to send and receive data.  To receive data from an input card, the message would be 5 bytes in length.  To receive data from a solenoid card (i.e. read the switch states), the message would be 4 bytes in length.  That means that there is enough time to service a 4 input card + 4 solenoid card system.  That means there are 96 total inputs ((4 * 16) + (4 * 8)) for the system, which seemed more than adequate to me.  I didn't really imagine putting 32 solenoids in a system and thought that could easily be dropped to 24 solenoids or lower, which gave me more breathing room on the serial port (only 32 of the 38.4 bytes per 10 ms).
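
Here is that back-of-the-envelope budget as a quick Python check.  The message sizes are the ones from the paragraph above; the 10 bit times per byte figure assumes a standard 8N1 serial frame (start bit + 8 data bits + stop bit):

```python
# Gen1 serial budget check.  Assumes 8N1 framing: 10 bit times per byte.
BAUD = 38400
BYTES_PER_SEC = BAUD / 10                  # -> 3,840 bytes/sec
POLL_MS = 10
budget = BYTES_PER_SEC * POLL_MS / 1000    # 38.4 bytes per 10 ms poll

INPUT_MSG = 5    # bytes to poll a 16-input card
SOL_MSG = 4      # bytes to read switch states from an 8-solenoid card

used = 4 * INPUT_MSG + 4 * SOL_MSG         # 4 input + 4 solenoid cards
inputs = (4 * 16) + (4 * 8)                # 96 total inputs

print(f"{used} of {budget} bytes used per poll, {inputs} inputs")
# -> 36 of 38.4 bytes used per poll, 96 inputs
```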

So that design was assuming that I would poll the inputs every 10 ms.  10 ms seems pretty slow, right?  Wouldn't I need to poll the inputs faster than that?  Originally I figured this out by probing a pinball machine and looking at the rate of the column strobe for the switch matrix.  These days it is a little simpler, and I'll instead refer to online information.  According to the PinballRehab website, the switch matrix is strobed every 2 ms (500 times/sec), so with 8 columns, each input is read every 16 ms.  What's that, you say?  The processor only reads each input once every 16 ms, or 62.5 times per second?  Sorry, that's it.  Of course, the OPP Gen1 boards sampled the inputs much faster, did debouncing in software, and could detect and report on edges, so even if the input wasn't polled fast enough, the board could "hold" that an input had occurred until the next read command came in.
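
Here is a minimal sketch of that edge-hold idea.  To be clear, the real Gen1 firmware is C on an HCS08 and this is not its code; the vote-counting debounce scheme and all the names are assumptions for illustration:

```python
# Toy version of "sample fast, debounce, latch edges until the host asks".
class DebouncedInput:
    def __init__(self, threshold=3):
        self.threshold = threshold  # consecutive samples to accept a change
        self.count = 0
        self.state = 0              # debounced level
        self.edge_latch = False     # sticky "a closure happened" flag

    def sample(self, raw):
        # Called far more often than the host polls (e.g. every ms).
        if raw != self.state:
            self.count += 1
            if self.count >= self.threshold:
                self.state = raw
                self.count = 0
                if raw:
                    self.edge_latch = True  # hold the edge for the host
        else:
            self.count = 0

    def host_read(self):
        # The latched edge survives even if the switch opened again
        # before the 10 ms poll arrived.
        edge, self.edge_latch = self.edge_latch, False
        return edge
```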

So when Joe proposed the Gen2 boards, things changed slightly.  Since the processor moved to a Cortex-M0 based processor, the boards could now support 32 inputs on a board, or 16 solenoids + 16 inputs per board.  (Basically a doubling of the density.)  The price per system dropped further because fewer boards needed to be purchased, wing cards could now be mixed and matched so exactly the right board could be placed at each location, and the serial speed went up to 115,200 bps, a 3x increase.  The great part is that this change in technologies did not change any of the physics of the game and how quickly inputs had to be read.  Inputs were still occurring at the same rate, so this just gave more headroom.  With Gen1 cards and the serial port, I felt I was a little too close to the edge of not having enough bandwidth.  Yeah, SharpeShooter 3 worked perfectly, but that isn't the most full-featured game.  With a 3x speed increase, I felt really good about the bandwidth on the serial port.

The Gen2 processor boards came with a USB to serial port converter chip on them.  That now eliminated the need to have an RS232 electrical level to 5V level converter.  Since some newer computers don’t have serial ports, it would have been necessary to buy a USB to serial converter and then convert the RS232 to 5V levels which seemed like too many adapters in my opinion.  (Most modern SOCs do have UARTs built into them.  This includes the RaspPi, BeagleBone, etc.)

So why am I talking about serial ports?  It basically stems from the fact that the default MPF configuration tries to request information on the serial bus too quickly.  (Maybe the default is to poll every 1 ms.  I believe Jan has reduced the default polling rate for OPP, but I'm not 100% certain.)  Old versions of MPF requested information so quickly that they saturated the serial link, and eventually data was lost on the link, which caused data byte errors.  The original plan was that the link would never be fully utilized…but what if it was?

As with all errors, I wanted to see where the errors occurred if the link was completely saturated.  The serial communication goes from a host computer through a USB to serial converter, across a 115,200 bps serial link to the PSoC 4200, and then back again.  USB ports are significantly faster than the serial link, so I wanted to make sure that the USB side was throttled properly.

For the first test, I set up a quick Python serial test to send a large amount of data to the USB virtual COM port and wrap it back without going to the PSoC 4200 processor.  That just tests the USB to serial port hardware and driver.  I threw down 260K of data and examined all of the data coming back to verify it was correct.  The test ensures backpressure is properly happening when going from the faster USB to the slower serial port.  I also watched the output buffer and verified that it was backing up because of the data going into the slower pipe.  That test worked 100%, which proves that the backpressure works and the buffering isn't limited by a hardware buffer inside the USB to serial port chip.
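
Something like the sketch below is what I mean.  This is not the exact test script; the port name, the chunk size, and the loopback arrangement (TX wrapped back to RX before the PSoC) are placeholders:

```python
# Loopback test sketch using pyserial.  Whatever goes out should come
# back byte-for-byte if backpressure and buffering are working.
import os
import serial  # pyserial

PORT = "/dev/ttyUSB0"        # placeholder; use your virtual COM port
TOTAL = 260 * 1024           # ~260K of data, as in the test above

tx = os.urandom(TOTAL)
with serial.Serial(PORT, 115200, timeout=5) as ser:
    rx = bytearray()
    sent = 0
    while len(rx) < TOTAL:
        if sent < TOTAL:
            sent += ser.write(tx[sent:sent + 4096])
        # out_waiting climbing here is the output buffer backing up in
        # front of the slower serial pipe, i.e. backpressure at work.
        rx += ser.read(ser.in_waiting or 1)
    assert bytes(rx) == tx, "data corrupted on the wire"
print("260K wrapped with no loss")
```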

The next test that I ran sent all of the data to the PSoC 4200, but used a card address that doesn't match the current card.  That means that all the data simply passes through the card without the card processing the information.  I once again threw 260K of data at it and examined the number of bytes received.  I got a 0.74% loss rate on the data.  Examining the data, I also found that I would lose a single byte at a time; it was not dropping large chunks of data and recovering.  That was good news because it helped prove my theory that it was the internal buffer in the PSoC 4200 that was overflowing and losing data.

So why is saturating the link so bad?  It means that as commands are sent to the card, they are buffered in internal memory for longer and longer before actually getting sent to the hardware.  A quick analogy might help.  Let's say that you have a sink that the faucet can fill faster than it can drain.  When you start, everything is great because the extra water just gets added to the sink, which is the buffer.  But as it keeps filling, eventually it overflows and gets all over the floor.  (That of course ignores the overflow drain at the top of the sink.)  With a computer, it is even worse.  The commands being put into the system are buffered in a FIFO fashion (First In, First Out).  As the buffer fills up more and more, the commands take longer and longer to get to the hardware, and suddenly when you tell a solenoid to kick, it takes a half second or a whole second before that solenoid finally gets the command.  The longer your system runs, the longer commands take to occur, since the buffer keeps getting larger and larger.  That is very bad.  Your system becomes much less responsive.
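
Here is a toy model of that failure mode.  The 8 ms and 10 ms numbers are made up for illustration, not measured from MPF or OPP; the point is only that any sustained overproduction makes latency grow without bound:

```python
# Commands are produced every 8 ms but the link drains one every 10 ms.
PRODUCE_MS, DRAIN_MS = 8, 10

link_free_at = 0
for n in range(1, 501):
    arrive = n * PRODUCE_MS
    start = max(arrive, link_free_at)  # wait behind everything queued
    link_free_at = start + DRAIN_MS
    latency = link_free_at - arrive
print(f"command #500 reached the hardware {latency} ms after being issued")
# -> 1008 ms; latency grows ~2 ms per command and never recovers
```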

When first discussing this problem, we batted around the idea of adding Xon/Xoff flow control.  There are two problems with that.  The first problem is that it would not stop the buffer from continuously increasing in size and making the system less responsive.  (It would actually exacerbate that problem.)  The second problem is that OPP commands are binary, so any data byte that happens to match the Xon or Xoff character must be "escaped" to ensure it can be distinguished from an actual flow-control character.  That means the amount of data on the wire has to increase to carry these escape sequences.  (I believe that Fast and PROC both use text based commands.  Xon/Xoff are non-printable characters, so they do not need to escape anything.  The downside is that the protocol is much less efficient on the wire, i.e. when they send a byte of data, they send two ASCII characters to represent the bits.  To send a byte of all ones, they would send two 'F' characters, or 0x46 0x46 in binary.  OPP simply sends 0xff and is done with it.  It means that the OPP protocol can be twice as efficient on the wire.  The downside is that it is much less human readable; I can't just look at a string of characters going past and easily tell what is going on, because many of them are unprintable characters.)
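
To make the escaping cost concrete, here is a sketch of one common scheme (DLE-style byte stuffing).  This is illustrative only, not the OPP protocol, which sidesteps the whole issue by not using Xon/Xoff:

```python
# Binary protocols must escape any payload byte that collides with
# flow-control characters, growing the data on the wire.
XON, XOFF, ESC = 0x11, 0x13, 0x1B

def stuff(payload: bytes) -> bytes:
    out = bytearray()
    for b in payload:
        if b in (XON, XOFF, ESC):
            out += bytes((ESC, b ^ 0x20))   # escape and transform the byte
        else:
            out.append(b)
    return bytes(out)

# A solenoid command that happens to contain 0x13 would otherwise be
# swallowed by the UART driver as a flow-control character.
print(stuff(bytes([0x13, 0xFF])).hex())   # -> '1b33ff' (2 bytes became 3)
```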

While doing this testing, I learned a couple more things about the buffering of the Cypress USB to serial port.  In Python you can ask the serial port how many bytes are waiting to be transmitted (the out_waiting parameter).  Even when this value is 0 (i.e. the host buffer is empty), there can still be up to 128 bytes of data in an internal buffer in the USB to serial port chip.  That is very problematic because you can't detect an idle time on the line just by polling the out_waiting parameter.  (You need to wait for out_waiting to be 0, and then wait another 10 ms for the USB to serial port chip buffer to clear.)  I also found that the serial link could handle bursts of 2160 characters without losing any.  That shows how long it takes for the backlog on the serial link to build up and eventually overflow the PSoC 4200 SCB buffers.
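
That workaround is simple enough to sketch.  The 10 ms settle time and the ~128 byte figure are the ones measured above; the helper name is mine:

```python
import time
import serial  # pyserial

def wait_for_link_idle(ser: serial.Serial, settle_s=0.010):
    # out_waiting == 0 only proves the host/driver buffer is empty...
    while ser.out_waiting > 0:
        time.sleep(0.001)
    # ...so give the chip's ~128 byte internal buffer time to drain too.
    time.sleep(settle_s)

# usage (port name is a placeholder):
#   ser = serial.Serial("/dev/ttyUSB0", 115200)
#   ser.write(cmd)
#   wait_for_link_idle(ser)   # now the line is genuinely idle
```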

So, long story short, what is the best fix for this?  I feel adding Xon/Xoff doesn't correct the problem and could lead to an even less responsive system.  (And it isn't just because I'm too lazy to add Xon/Xoff and escaping to the protocol.  It just isn't the correct fix.)  I really believe that tuning the polling rate is the correct thing to do here.  I'll look through the OPP MPF platform and see if there is an easy way to auto-tune the rate so others don't need all this knowledge to make the right call on how fast to poll the hardware.  I gotta get in there anyway to add OPP RGB LED support.


11/14/2017 – Slowly continuing

Lots of small projects happened over the last few months.  As stated previously, I'm taking a sabbatical from large pinball projects.  I added some functionality to the MPF interface, which Jan merged into the MPF project.  I can't even remember what that functionality was at this point.  (Maybe support for switch matrices?  Yeah, I'm going to go with that.)

In the last month, I've been writing a Qt application to optimize picking groups during pinball league nights.  I wrote up a whole article on why the method is better than what is currently out there, including academic papers on the math behind it.  I then went through the math to figure out the margin of error in determining the best player.  Yeah, I'm nearly certain only Bowen would have been able to follow it, or all those statistics heads that read the blog (well, I doubt there are really any statistics majors reading this blog).  I used the New England Pinball League (NEPL) attendance at Pinball Wizard Arcade (PWA) to see if the new method really made a difference.  After doing the math, it became evident that not enough games were being played during the season to guarantee that the best players were placed in A division.  The method did improve the quality of the players placed in division A, but with the limited number of meets, it couldn't guarantee that the absolute best players were placed there.

An easy way to understand this is to think about the worst case scenarios.  Let's say you have a league with 80 players and they attend every meet.  (That makes it easy because it is 20 groups of 4 players each.)  Let's say that 40 of the players are the Bowens of the world (excellent players), and 40 of the players are the Hughs of the world (sucky players).  Even though it is unlikely, when randomly assigning groups, all the Bowens could always be placed with other Bowens.  When the groups are split into A, B, C and D divisions, all divisions would be made up of half Bowens and half Hughs.
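
A quick Monte Carlo of that setup shows the milder, everyday version of the problem.  The trial count is arbitrary, and this is just an illustration of the argument, not the math from the article:

```python
# 40 strong and 40 weak players, randomly grouped in fours each meet.
# Count groups that are all-strong or all-weak: finishing order inside
# those groups says nothing about where a player belongs.
import random

players = ["Bowen"] * 40 + ["Hugh"] * 40
TRIALS = 10_000
uniform = 0
for _ in range(TRIALS):
    random.shuffle(players)
    groups = [players[i:i + 4] for i in range(0, 80, 4)]
    uniform += sum(1 for g in groups if len(set(g)) == 1)
print(f"average all-Bowen or all-Hugh groups per meet: {uniform / TRIALS:.1f}")
# -> roughly 2.3 of the 20 groups per meet tell you nothing useful
```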

So the algorithm I worked on reduces the number of times that a single player will replay another player within a season.  It makes the above situation less likely, but it does not eliminate the possibility.  The only thing that can really eliminate the possibility is to ensure that there are enough meets in the season to guarantee that everybody gets to play everybody else (for the 80 player league above, with 79 opponents and only 3 per meet, that is at least 27 meets).  As per the current NEPL rules, only the first 5 weeks determine which division you will be placed in.  So at most, you will only play against 15 different opponents.  (With randomly assigned groups, it can be even fewer.)

Just to state that more succinctly: if there are more than 16 people playing at a location, even with the best group picking algorithm in the world (one not relying on past wins/losses), the breakdown into divisions cannot reliably put the correct people into the correct divisions.

The algorithm that I created reduces the effects of the randomization, but it can't overcome it because of the limited number of meets.  So basically, the algorithm is better, but there is no way that it can be perfect.

I guess it is like baseball to me.  I hate the whole idea that an umpire can call pitches, and “open up” or “restrict” the strike zone.  It drives me insane.  A computer can do a much better job, and there wouldn’t be the fear of umpire bias, or bad calls.  My wife likes the “human” aspect of the umpire calling strikes and balls, but I say, bah, humbug.

Anyway, the Qt project is now up in the repository.  With all of the above issues, I don’t know whether it is really valuable, but it was fun to write.

9/29/2017 – Open source is open source

This has come up again, and when people start yelling about others, it gets me a little angry.  Think of this blog post as another one of those where I’m just complaining.  It is only tangentially related to pinball, but it irks me so here we go.

I'm a big proponent of open source.  Why?  Because open source tools have enabled me to create a bunch of great things.  Things that would not have been possible without open source.  Let's go over the tools that I use most days of my life: gcc, Linux, Python, Eclipse, and millions of others.  That list doesn't include many subprojects like KiCad, GIMP, Audacity, etc.  It also fails to mention open standards such as C, C++, IP, and UDP, which make our world go around.  I guess that I'm not just talking about open source, but transparency and open standards.

Everything that I have produced for the Open Pinball Project (OPP) is under the GPLv3 license.  Why is that license important to me, and why not simply release the stuff as un-copyrighted?  An un-copyrighted work can be used by everyone (that's good).  Here is the important distinction: if you base your project on a GPLv3 project, your project must also be released under GPLv3.  This is referred to as copyleft.  Anything that is derived from the original GPLv3 copyrighted material must also be available for others to derive further works from.

So anyone can take anything in the OPP repository and use it for their own purposes.  The only stipulation that I really care about is that if they do that and make improvements, those improvements must also be available publicly so others can easily get the improvements and new works.  That ends up helping to increase the quality of the software or hardware in the future.

So why did I release it with that particular license?  OPP is months of hard work.  Maybe even years.  I have no idea how much time I have spent on it over the last 10 or 15 years.  I want others to be able to leverage my work so that they can learn from it, extend it, and generally use anything that they find useful.

I take copyrights very seriously.  Here is a strange fact that most people don't know.  If you use the OPP pinball framework, a sound can be placed either in the sounds folder or in the sounds/copyright subfolder.  If it is in the copyright subfolder, it should not be saved to the repository and should simply exist on the physical machine.  For the Van Halen pinball machine, there are many Van Halen/Hagar songs that are played in the background.  That's all copyrighted material, so I don't have the rights to put it in the repository.  If the framework can't find a sound file that is supposed to be in the copyrighted folder, it plays a standard sound clip stating "that is copyrighted".
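
In rough sketch form, the lookup works something like this.  The folder names come from the paragraph above, but play_clip() and the fallback file name are hypothetical stand-ins, not the actual OPP framework API:

```python
import os

def play_clip(path):
    # Hypothetical stand-in for the framework's real audio call.
    print("playing", path)

def play_sound(name, base="sounds"):
    normal = os.path.join(base, name)
    copyrighted = os.path.join(base, "copyright", name)
    if os.path.exists(normal):
        play_clip(normal)
    elif os.path.exists(copyrighted):
        play_clip(copyrighted)
    else:
        # Copyrighted clips live only on the physical machine, never in
        # the repo, so a fresh checkout falls back to a stock warning.
        play_clip(os.path.join(base, "that_is_copyrighted.wav"))
```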

At one point a few years back, people were complaining that others were using their work in ways it wasn't meant to be used.  I believe MPF (Mission Pinball Framework) was originally branched from PROC work that Gerry, Mike, and some others did.  I think this was in regards to MPF supporting multiple pieces of hardware.  Don't complain about that.  Feel honored that your hard work is getting new life and others are benefiting from using it.  That is the point of open source.  It allows others to use your sweat and tears and make something better than what you ever imagined.

People aren’t “stealing” your work…they are extending it.  Imitation is the highest form of flattery.  If you don’t want others to use your work, don’t release it as open source software/hardware/whatever.  Keep it closed and locked away, and I really do feel that your stuff will suffer when an open source solution comes along.  The reason that so many people use the PROC platform is because it is relatively open, and it has a well defined interface.  It has a skeleton framework that others can use to create their own projects.

So why did I get on this rant?  Strangely it is because of Ben Heck.  Ben Heck and Mike from HomePin are having a little tiff.  The PinHeck system I believe was released under the Creative Commons – Share Alike (CC-SA) license.  (That holds many of the same copyleft attributes as GPLv3).  It seems that Mike, or people working for Mike, may have based some of his designs on the PinHeck system.

So a couple quick points.  Under that license you must attribute the original design in some way.  Yada, yada, this is based on such and such.  Regardless of the license, that is the right thing to do.  If that is what happened, just fess up, Mike, and say it.  There is nothing illegal about copying that design lock, stock, and barrel and using it for your own purposes, because it was released under CC-SA.  Any fixes that he made to the design must also be published, because it is a copyleft license.

So, I assume when Ben caught wind of this, he pulled the files from the server.  I haven’t looked lately if they are there.  The CC-SA license is irrevocable.  You can’t simply say, sorry, I now don’t want it to be open to the public.  Once open source, it is always open source.  The legal ramifications would be impossible to reconcile since others by design could have based their works on your work.

Pulling things off the server is absurd.  Once on the internet, it is always on the internet.  There is a project called the Internet Archive that lets you go back in time and find things that people have posted on the internet and then removed.  Even though Ben has removed the content, it is still out there.  I have all the design/art files that Ben posted for America's Most Haunted (AMH).  They were put out there so others could extend his work and make mods and toppers.  They could even build another AMH if they wanted to spend the time.  They are still readily available on the Internet Archive.

So that gets me to the last point.  Ben mentioned Charlie caught wind of someone trying to build another AMH.  Maybe it was me, maybe it was somebody else.  The truth is I actually considered it.  Charlie, Ben, you can rest assured, that I have no desire to make an AMH from scratch.  That being said, it would be a technically interesting project, but I don’t feel it is worth the time or effort.

I’m still hoping for somebody to create a completely open source pinball machine from scratch.  Maybe in the next 10 years.

If anybody has more correct details on the origins of MPF, why Ben pulled the art files, etc., I will be happy to fix the above post or post corrections.  Most of this is third- or fourth-hand information that I've gleaned from the internet, so it is not very reliable.

9/21/2017 – Updated NeoPixel Library

This is the first step in a multi step process to update how NeoPixels are dealt with in the OPP framework.  This is going to be pretty techy centric, so I apologize in advance.  (Truth be told, I really have no idea who reads this blog, so maybe those two people like techy topics.  I just don’t know).

The original implementation of lighting NeoPixels using the PSoC used a SPI bus to create the protocol.  A single bit sent to a NeoPixel is a waveform which is high for 417 ns, high or low for the next 417 ns depending on whether the bit is a 1 or a 0, and low for the last 417 ns.  If you add these 3 portions of the waveform together, a single bit takes 1.25 us to send.  The SPI bus is one of the simplest buses.  To send a 1, it drives the line high for a clock cycle.  To send a 0, it drives the line low for a clock cycle.  If I set the clock of the SPI so a full clock cycle is 417 ns, I can have the hardware send the bit stream at the correct rate.  There is one problem.  Instead of sending a single bit, I have to send the framing portion of the protocol, so I end up sending 1x0, where x is the data bit.  (If I want to send a 1, I send 110.  If I want to send a 0, I send 100.  Simple.)  Here's the problem: making 3 bits out of 1 bit kind of stinks because it keeps crossing byte boundaries and such.  Really quite annoying.
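
Here is what that expansion looks like in Python.  The firmware itself is C on the PSoC, so this is just to show the 3x blow-up and the byte-boundary headache:

```python
def frame_neopixel_bits(data: bytes) -> bytes:
    # Expand every data bit into the 1-x-0 pattern (three 417 ns slots).
    bits = []
    for byte in data:
        for i in range(7, -1, -1):          # NeoPixels take MSB first
            bits += [1, (byte >> i) & 1, 0]
    # Repack the 3x expanded bit stream into bytes for the SPI FIFO;
    # this repacking is exactly the byte-boundary annoyance.
    out = bytearray()
    for i in range(0, len(bits), 8):
        chunk = bits[i:i + 8]
        val = 0
        for bit in chunk:
            val = (val << 1) | bit
        out.append(val << (8 - len(chunk)))  # left-align a ragged tail
    return bytes(out)

print(len(frame_neopixel_bits(b"\x00\xff\x00")))  # 24 bits -> 9 bytes
```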

One way to reduce this annoyance is to just store everything as 3 bits in RAM.  That means that each NeoPixel requires 9 bytes (24 bits/NeoPixel * 3 = 72 bits, or 9 bytes).  As mentioned in the last post, the PSoC 4200 only has 4K of RAM, so that disappears pretty fast.

Another way is to generate the framing on the fly, so every time a bit is sent to a NeoPixel it is prepended with a 1 and appended with a 0.  This takes much more processing time, but saves RAM.  (This was the original implementation, and the code that contains all the bit shifts and boundary checking is magnificent.  Completely unreadable.)

Neither of these solutions is optimal.  (Heck, they both really kind of stink.)  Enter the little PLD that is part of the PSoC 4200.  Why not create a little state machine to pull bytes from a FIFO and create the framing in hardware?  That way, the processor doesn't have to do any of the bit banging stuff.  As an added bonus, the PSoC has internal FIFOs that can be set for either 24 bits (RGB NeoPixels) or 32 bits (RGBW NeoPixels).  In that way, the processor just has to keep the FIFO from underflowing and can use a single write to send a complete NeoPixel update value.  Very clean.

At this point, I've created a library that does all the NeoPixel heavy lifting, with the state machine included and an interrupt to keep the FIFO from underflowing.  In the OPP code, I will simply pre-allocate 3 bytes for every NeoPixel in memory; then as MPF or whatever framework wants to update the value of a NeoPixel, it just has to send the index and the new value.  At this point I've just finished the library, and with the previous NeoPixel incarnation, I had already set up commands to change NeoPixel values.  I might update those commands to be less restrictive.

Using this library, the load on the processor should be as small as possible on the PSoC 4.  The only thing that would be better is to buy a PSoC 5 and set up a DMA to fill the FIFO.  That would mean the updates could happen without any processor intervention at all, except for starting the DMA.

Here is a link to the Cypress question, and the final library solution that I posted.  Since I’m going to integrate it into the OPP firmware, there is probably little reason to click this link, but it does have the library.  Cypress Developer Community Link

8/30/2017 – Arduino revisited one more time

So things have been rather quiet at OPP central right now.  Got back from Pintastic, and I've basically done no real pinball related things.  I kinda miss it since I was spending so much time on it for a while.  Nobody is even posting on the BPA (Boston Pinball Association) except for the random machine for sale here or there.  Pinside has also been rather dull, but that is probably because I don't know which threads are interesting.  There are some small email exchanges going on, but even those are mostly short and to the point.

So, pondering one evening, I thought to myself that I should create an Arduino shield since so many people use that platform.  As you've all read before, I hate the Arduino framework, but I actually like the base processors a lot.  One guy mentioned that I could simply program everything in C and ignore all the "helpful" classes that hide all the dirty nastiness of the processor.  (Of course that dirty nastiness is what allows you to squeeze so much performance out of these little processors.)  My original thought was that because it has more I/O, it would be easier for people to use it in a centralized way.

Anyhow, last weekend I started to do the base research on the feasibility and the value of creating such a shield.  Below is a little bit of a comparison between the PSoC 4200 that I currently use for the OPP wings and the Arduino Mega.

  1. Cost – PSoC $4, Mega $8.45 on eBay.  The Arduino is more than twice as much.  This is buying off eBay and trying to find the cheapest source.  If you went AliExpress you could get it for about $7, but I have never bought off AliExpress, so fear of the unknown factors in a little bit.  You can probably assume about a 30 day wait if you buy from AliExpress.  PSoC is the winner here.
  2. I/O Pins – Mega 54, PSoC 36.  The Mega definitely has more I/Os.  That makes it very attractive and is probably why I started doing this exercise.  It was difficult for me to figure out from the data sheet how many are dedicated I/Os versus general purpose I/Os.  I think that is mostly because the Arduino framework wants to configure certain pins to certain functions so people can create generic shields.  For a pinball controller, you mostly want raw inputs and outputs.  Some PWMs would be great also if they could be routed to the correct pins.  The PSoC has 36 pins, of which 32 are generic I/Os and 2 are used for communications (Tx/Rx), leaving 2 more pins.  For me it is really the sweet spot of I/Os.  I can send a 4 byte (32 bit) value back to the host that gives the state of all the inputs.  At 54, I would probably choose to send back a 6 byte (48 bit) value, which would make the host code a little more sticky.  Regardless, more is always better, so Arduino is the winner.
  3. Voltage – PSoC 5V, Mega 5V.  I like 5V I/O more than 3.3V and both processors support it.  At 5V you have more choices of MOSFETs so that is all good.  Interfacing to a Pi is more difficult, but level shifters are pretty easy to work with in this day and age.
  4. Flash – Mega 256K, PSoC 32K.  Mega has much more flash.  In the latest version of the OPP firmware, it is using approximately 70% of the flash of the PSoC.  If I had more codespace, I’m not sure what I would do with it.  I’m guessing the Mega needs more flash because of the Arduino framework.  That is not a big selling point to me as mentioned before.  Arduino is clearly the winner.
  5. SRAM – Mega 8K, PSoC 4K.  Once again, the Mega has more resources, including SRAM.  (That's where your code's variables, stack, and heap live.)  The original OPP project worked with a processor that had 8K of flash and 512 bytes of RAM.  (That included a bootloader that took 768 bytes of flash.)  Again, Arduino is clearly the winner, but since I have enough resources, it doesn't matter that much.
  6. Processor Frequency – PSoC 48 MHz, Mega 16 MHz.  OK, this is one parameter that I do care about.  With the addition of switch matrix support, the processor is starting to need to do much more processing.  With the PSoC it has plenty of headroom and I don’t worry about things like updating LEDs, sending responses to the host, reading the switch matrix, and firing solenoids all at the same time.  The running loop happens so quickly, that all of these things can happen within that loop time.  At 16 MHz, I’m not so certain.  When trying to decide whether to make an Arduino shield, this was the nail in the coffin for me.   PSoC is clearly the winner.

There is an Arduino Due.  It has the same number of I/Os, but changes the processor to 3.3V.  It has even more flash (512K) and RAM (96K), and it runs at 84 MHz.  It also costs about $12.50.  As a processor, that is interesting.  As a pinball controller, it just seems a little expensive to me.  (Going into this, I thought I was going to find an equivalent part to the PSoC at about $6 in the Arduino world, but I just haven't found it.)

I’m glad I took the time to learn more about the Arduinos, and do a little bit of research.  At this point, the idea is going back onto the shelf until the prices drop a little further.  I might ask for an Arduino Due for Christmas, just to play with it, but that’s what it is going to end up being, one more toy in the basement in a box.

7/14/2017 – Youtube videos of talk available

If anybody is really bored this weekend, here are the videos of the talk that I gave on pinball electronics.  Goes from EM machines to what I call Gen3 machines.  Those who couldn’t make it to Pintastic 2017 might be interested.

I forgot to mention the total cost of the Van Halen machine.  All said and done, it cost about $480 to make it a reality.  That included everything from the base non-working Dolly machine to the two cards that I blew up and replaced because I kicked out the ground plug.

Without further ado, here are the videos.  (The talk was broken into 3 separate files by the camera, and I don't know how to join them together, so sorry about that.)

Look, it’s Dave Marston as the thumbnail!


7/12/2017 – Pintastic 2017, Now it is over

Pintastic 2017 has come and gone.  I am as happy as can be to see it gone.  Too much stuff was forced into a small period of time.  It all got done.  Some of it could have been done better, but everything got done.  Here is a quick rundown:

Drove there Friday morning and met up with Dave Marston.  Made sure that he had my presentation and that we could display it on the screen.  That all went well.  Met with John C. (originator of pinball night back when I lived in CT), and we started unloading the machines.  After about an hour and a half, both machines were set up in the free play hall.  That's when I noticed an issue…I had forgotten to bring the boom box topper for Van Halen…

You can't have a music pin without speakers, so it was back into the car to drive an hour each way to grab the topper from my house.  Doh!  My own fault for only making sure all the machines were in the car and not doing a final walk-through to see if I had missed anything.

We got back to Pintastic at noon and finally got the machine all together.  One issue with Pintastic is that the free play room is amazingly loud.  Ear splitting loud.  My kids don't want to go to Pintastic because it is so loud in the main room.  In my basement, the speakers on the pins are so loud that I only turn the amplifiers up to about 1/4 power.  At Pintastic, I needed to turn the amplifiers all the way up, and the call outs still could not be heard clearly over the din.

Back to the machines.  Van Halen was running perfectly, but SharpeShooter was having issues with one of the solenoid cards.  After an hour, I finally traced it to the fact that the high power ground for that card wasn’t working properly.  I could have tried to jumper the wire to another ground wire, but then I pulled out the molex connector and squeezed it slightly.  That was enough to make the connection, and everything was working properly.  These are the standard molex connectors used for PC power supplies where the connector on the board is a pin, and the wires from the playfield surround that pin.  I have never seen those fail before, but it could be because I tugged the wires too hard and bent the circular pin.  It was an easy fix once I figured it out.

While the Van Halen pin never had any problems, players could not hear the callout telling them how to choose David Lee Roth or Sammy Hagar as the lead singer.  If I had it to do over, and wasn't so frazzled from fixing SharpeShooter, I would have simply defaulted the game to randomly pick the lead singer.  As it was, people many times started a four player game just from pounding on the buttons trying to get the actual game going.  If I was standing there, I would explain how to start the game, but most of the time I wasn't standing beside the machine.  Van Halen actually ran all weekend and never needed to be reset.  That is a testament to that little Pi and how well it worked.

So now everything was working, and we decided to go to lunch.  (Went to BT Smokehouse, which is by far the best restaurant in the area.  Much better than the Oxhead Tavern.)  Got back and found out that every once in a while SharpeShooter lost USB communication.  That stemmed from one of the changes I made between two years ago and this year.  I didn't have time to fix it properly, so I threw a keyboard and mouse onto the USB bus, which fixed the issue most of the time.  Now that it is home, I can fix it properly.

By the time we got back from lunch, it was almost time for the seminar.  I sat in on the last half of the seminar on making a 60-in-1.  The guy seemed really knowledgeable and like he had been building them for years.  After that, I gave my seminar on general pinball electronics.  I have to give a shout out to Richard K. for giving his opinions on the best way to clean contacts and controllers in EM machines.  The talk ran a little bit long, and we didn't get to the last few slides.  It sounded like the audience was a mixture of people interested in pinball electronics and people building their own machines.  I gave away a bunch of free stuff, and hopefully everyone was happy.

After the talk, I finally got a chance to go play some pinball.  We ended up playing until the close of the freeplay room.  We went into the VIP lounge but it was even louder in there.  There were so many people, and the room was so small, that we just walked right out.  We adjourned to the bar, had a couple of beers, and then closed out the night.

Next day, machines were up and running properly, so I got to spend more time playing a ton of EMs.  I love the old EMs.  They are certainly my favorites when going to a show like this.  There was a Zaccaria Time Machine that I really wanted to play, but it had a failure and wasn’t running.

At 11 am, I walked a couple people through what the inside of the machines looked like, and how the boards worked.  It ended up only being a couple of people from Manchester, NH, but I was glad to see their interest.

At about 3:00 pm, I ran out of steam and decided to pull the machines and go home.  Through my own mistake, I had never officially entered the machines into the free play room.  The nice part was that I could pull the machines a little bit early and not be charged a fee.  I went back to BT's for a last meal and drove home.  I was absolutely spent on Saturday, and then spent most of Sunday lying on the couch.

When I get a chance, I will put the videos of the talk up on YouTube.  Both John and Derek K. recorded the talk.  That's about how it went.