in collaboration with Claire Chen, Arka Roy, and Lucas Moiseyev
I had, rather impulsively, decided to join this hackathon at around 2am six days ago. I have been given the honor of working with three very talented and capable Electrical and Computer Engineering majors (Claire Chen, Lucas Moiseyev, and Arka Roy). My home department will always be Design, but I am currently pursuing a second major in Mechanical Engineering. I’ve decided to leap headfirst into this hackathon, approaching it as an opportunity to stretch my legs in the engineering field and in a truly interdisciplinary environment.
We figured it would be beneficial for our team to brainstorm a couple of ideas beforehand so we wouldn’t enter the hackathon completely empty-handed. Wednesday night, what we had expected to be a one-hour meeting turned into a nearly five-hour one, ending with the four of us defeated on the couch in Carnegie Mellon’s late-night dining hall at 3am.
Come Thursday night, we reconvened and came up with a plan of action attached to several potential ideas involving both software and hardware. We knew we wanted to incorporate augmented reality, but it wasn’t until the hackathon’s opening ceremony that we solidified our idea.
A 3D whiteboard that can be accessed through a mobile application.
It seemed so obvious that we were surprised we couldn’t find anything like it. Mobile 3D-modeling apps and 2D-image-to-3D AR translation both exist, but nothing quite matched what we had in mind: something that lets the user quickly draw their ideas in an XYZ grid to explain them better. It has applications in almost any field, from design to engineering, and it could also serve as an education and communication tool.
Up to this point we had been uneasy about using augmented reality because of our limited experience with the software and hardware involved. Neither Claire nor Arka had experience with Vuforia (the AR software) or Unity, but they were willing to learn and try, being comfortable with other languages like C and Python and having participated in hackathons before. Lucas serves as the resident hackathon veteran on the team, boasting nine of them within the past year. I, in contrast, have no hackathon experience but a somewhat decent working knowledge of hardware; I was genuinely planning on relying on my adaptability to remain a functional member of the team.
6:00 pm, Feb. 10, 2017: TartanHacks begins
Brainstorming continues as we work through how we are going to measure x, y, z coordinates and how to communicate them to the Android app.
6:56 pm → first successful test of AR (on a laptop)
8:31 pm → hardware parts are obtained and preliminary plan is hatched
The plan involves four modules, each a speaker paired with a photoresistor. A handheld master unit, which the user moves to control where they are drawing, emits a pulsing light; when a module’s photoresistor detects that light, the module responds by emitting a high-frequency sound at its own assigned time. All four modules communicate with the Android app over WiFi or Bluetooth (whichever is easier software-wise).
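For a sense of the math behind this plan: if the arrival delay of each module’s sound can be timed, the delays convert to distances via the speed of sound, and four known module positions pin down the pen’s (x, y, z). A minimal sketch of that trilateration step, with made-up module positions and delays (none of these values are from our build):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in room-temperature air

def trilaterate(module_positions, delays):
    """Estimate the pen's (x, y, z) from sound time-of-flight to four modules.

    Subtracting the first sphere equation |p - m_i|^2 = d_i^2 from the others
    cancels the quadratic term, leaving a linear system in p.
    """
    d = SPEED_OF_SOUND * np.asarray(delays, dtype=float)  # delays -> distances
    m = np.asarray(module_positions, dtype=float)
    A = 2.0 * (m[1:] - m[0])
    b = (np.sum(m[1:] ** 2, axis=1) - np.sum(m[0] ** 2)
         - (d[1:] ** 2 - d[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical module layout (meters) and measured delays (seconds)
modules = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(trilaterate(modules, [0.00206, 0.00277, 0.00244, 0.00206]))
```

With exactly four modules the linear system is fully determined; adding more would overdetermine it, and the least-squares solve would then average out timing noise.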
9:22 pm → first successful test of AR on the mobile platform
11:28 pm → questionable drawing capabilities
11:45 pm → begin connecting all hardware on an Arduino Mega. We are sticking with the four-sensor reference point method previously outlined, but using infrared emitters and detectors instead.
It’s midnight and we snack.
1:54 am, Feb. 11, 2017 → after meeting with the hackathon mentors, the software side of the team feels comfortable with their “draw” function.
The team is now considering using two mobile phones as cameras and running OpenCV to triangulate a cubic AprilTag. Software is going to figure out how to get OpenCV functions onto mobile, run both phones on the same timestamp, and generally understand how it all works. We are concerned that timestamps from the two smartphones won’t sync, causing inaccuracies in the triangulation. Hardware is working on building the drawing “cage” and the pen hardware/housing. It is unclear whether we will use a Teensy or an Arduino Uno, but for now we are using the Uno in conjunction with a WiFi-enabled NodeMCU. At this point we’ve scrapped Bluetooth, as WiFi communication will probably interface better with Unity and OpenCV.
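The triangulation step itself is standard: with each phone’s calibration known, two matched pixel observations of the tag determine one 3D point. Here is a minimal direct-linear-transform (DLT) sketch in plain NumPy; OpenCV packages the same operation as cv2.triangulatePoints. The projection matrices and pixel coordinates below are illustrative placeholders (a real setup would get P1 and P2 by calibrating the phones), not values from our project:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """DLT triangulation of one point seen by two calibrated cameras.

    P1, P2: 3x4 projection matrices; uv1, uv2: the point's image coordinates.
    Each observation u = (P[0] . X) / (P[2] . X) yields a linear constraint
    on the homogeneous 3D point X; the SVD null vector is the best fit.
    """
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize to (x, y, z)

# Illustrative cameras: identity intrinsics, second phone shifted 1 m sideways
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
print(triangulate(P1, P2, (0.25, 0.1), (-0.25, 0.1)))
```

Our sync worry shows up here directly: if the two frames are captured at different instants while the tag is moving, uv1 and uv2 describe different physical points, and the solve quietly returns a 3D point that fits neither.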
4:41 am → about 11 hours till deadline: Hardware is bringing the cage into SolidWorks and trying to get a DXF file of the smartphone stands ready for the laser cutter; it is also still dealing with Bluetooth connectivity. Software is still dealing with Bluetooth and OpenCV.
7:55 am → somehow we came up with a game plan for our software issues, having concluded that our previous method wasn’t getting anywhere. Hardware no longer needs to make a “cage” to contain the three-dimensional drawing, so focus shifts to the wand.
3:12 pm → the struggle with the LightBlue Bean finally ends! As with most things, it was much easier to use once we understood it better. Software has figured out Bluetooth communication at this point, but the mobile platform is no longer working: Vuforia is down, though laptop simulations still perform the drawing in XYZ.
4:30 pm → we submit. We’ve finished the wand, but the AR isn’t as reliable as we would like it to be.
For what it’s worth, we did have a working mobile AR platform and user-assigned image sensing capabilities in conjunction with our hardware, just unfortunately not at the same time. The team departed the hackathon satisfied with what we were able to accomplish and assured that our idea is feasible. We were surprised when we were asked to present at the hackathon’s closing ceremony, and even more so when we received the Google-sponsored award for “Most Likely to Make Tangible and Lasting Change.” Truly a serendipitous 24 hours.