<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[hacking for love]]></title><description><![CDATA[writing code, making music, and building stuff for those we love]]></description><link>https://www.hackingforlove.com/</link><image><url>https://www.hackingforlove.com/favicon.png</url><title>hacking for love</title><link>https://www.hackingforlove.com/</link></image><generator>Ghost 2.4</generator><lastBuildDate>Fri, 03 Apr 2026 11:07:31 GMT</lastBuildDate><atom:link href="https://www.hackingforlove.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[The mini closet standing desk (wood + plastic projects part 1)]]></title><description><![CDATA[Fourth Chapter - Of how we built a tiny standing desk amalgamating wood and plastic. The simplest projects can be some of the most enjoyable ones. ]]></description><link>https://www.hackingforlove.com/standing-desk/</link><guid isPermaLink="false">5c0f279c8d25d93df1b2a56f</guid><category><![CDATA[woodworking]]></category><category><![CDATA[3d-printing]]></category><dc:creator><![CDATA[Sebastián Estévez]]></dc:creator><pubDate>Tue, 17 Sep 2019 01:38:40 GMT</pubDate><media:content url="https://www.hackingforlove.com/content/images/2019/09/desk.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.hackingforlove.com/content/images/2019/09/desk.jpg" alt="The mini closet standing desk (wood + plastic projects part 1)"><p>My wife Melissa wanted a desk for our apartment. Ideally a standing desk where she could set up her workstation whenever she occasionally works from home. Rather than going out and buying a new desk and using up space in our small NY apartment, I had a better idea. 
We would combine her woodworking abilities--first inspired by her dad and brother's skills and later developed in high school wood-shop--with our newfound ability to make things out of plastic (my family and I have our own passion for wood which I will get into in a separate post).</p><p>My initial attempt at describing my hybrid wood + plastic vision was met with skepticism by my dear wife, so I knew I had to refine my pitch. I share some background and the pitch I used to obtain her sign-off and collaboration in the following section.</p><p>This is the first in a two-part wood + plastic series of things we made in 2018. </p><h2 id="the-background">The background</h2><p>We have a closet that goes on the wall of our bedroom. It was a well-intentioned gift from my mom. A gift against which, I hate to admit, I was vehemently and mistakenly opposed at first. I thought it would be too bulky, did not think it would be worth installing given the short-term nature of New York City leases, and was generally opposed to it from some misplaced sense of masculine independence that I can't quite explain and now thoroughly regret.</p><p>Since then I have told mom more than once, with profound and heartfelt apologies, that the closet was a great idea. 
Melissa loves it, and I hope that making it the center of this post will continue to counteract my ill-advised words and deeds (sorry mom!).</p><p>With some effort, we were able to transport the closet to our apartment and reassemble it there when Melissa and I got married and moved in together.</p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image-21.png" class="kg-image" alt="The mini closet standing desk (wood + plastic projects part 1)"><figcaption>Closet assembly in progress</figcaption></figure><p>Here you can observe an exhausted but satisfied Melissa, who lies star-fished on the floor below our lovely and successfully assembled closet.</p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image-20.png" class="kg-image" alt="The mini closet standing desk (wood + plastic projects part 1)"><figcaption>Star-fished Melissa</figcaption></figure><h2 id="the-pitch">The pitch</h2><p>The concept was to craft a tiny standing desk that could either stick out of the closet when in use, or act as just another shelf when idle. 
It would rely on pegs [dowels] for support (the same mechanism that holds up the ordinary shelves in the closet), and its key feature would be 3d printed supports that would either create two surfaces when the desk was retracted, or act as a sort of vertical, drawer-like track to support it when deployed.</p><p>I alluded to this mechanism in a <a href="https://www.hackingforlove.com/3d-printing-primer/">previous post about 3d printing</a> and mentioned the app (3DC.io) that I used to make the mock-up shown below:</p><figure class="kg-card kg-embed-card"><iframe width="459" height="344" src="https://www.youtube.com/embed/4hHd6lLA-C4?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><figcaption>Standing desk pitch video</figcaption></figure><p>It only took me a couple of minutes to craft the mock-up, and once Melissa had seen how it would work with her own eyes, she was all in!</p><h2 id="the-bits">The bits</h2><p>I used OpenScad to generate the 3d model; I can't stress enough how much I love this tool. The fact that it uses a programmatic approach may be daunting to some, but it really is a pleasure to work with. </p><p>Here's a snippet of the code I used to build the front supports:</p><pre><code>inch = 25.4;
pegDistance = 1.25*inch;
numPegs = 3;
margin=inch;
baseThickness = inch / 2;
pegRadius = 2.4;
pegHeight = 10;
channelWidth = inch*.75;

//Pegs
translate([margin,0,baseThickness]){
  for(i=[0: 1: numPegs]){
    translate([pegDistance * i,0,0])
      AddPeg();
  }
}

//base and support
difference(){
    length=pegDistance * numPegs + 2*margin;
    translate([0,-margin*.75,0]){
      cube([length,margin*1.5,baseThickness], center=false);
    }
    translate([0,-channelWidth/2,-1]){
      cube([length,channelWidth,baseThickness], center=false);
    }
}


module AddPeg (){
  difference(){
    cylinder(h = pegHeight, r = pegRadius);
    translate([-pegRadius,-.5,0])
      cube([pegRadius*2, 1, pegHeight]);
  }
}
</code></pre>
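<p>As an aside, a model like this doesn't have to be rendered through the GUI: OpenScad also has a headless mode that can export straight to an STL for slicing. A minimal example of that step, assuming the snippet is saved as a hypothetical `supports.scad`:</p><pre><code># render the model headlessly and export an STL for the slicer
openscad -o supports.stl supports.scad</code></pre>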
<p>You can see this time I used my own `module` and a `for` loop to add the pegs. Modularizing designs in this way is extremely useful when they have repeating features. </p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2019/09/image.png" class="kg-image" alt="The mini closet standing desk (wood + plastic projects part 1)"></figure><p>The slots in the dowels were meant to provide some give in case their radius did not exactly match the holes in the closet, but I found that they weren't necessary and that they made the dowels themselves rather brittle. </p><h2 id="the-atoms">The atoms</h2><p>I used PLA for this print; PLA is on the brittle side and I ended up breaking one at one point. If I were to print these again I would use PLA+ from eSun. It prints a bit hotter but it's less brittle and generally easier to deal with. It's also quite affordable at around $20 per kilogram spool on Amazon and it's the only brand I've seen that ships with a refillable plastic spool. Once you have one, you can just buy the refills that slip right in, no risk of tangling up your filament.</p><p>Here's a photo of the supports when we first mounted them on the closet.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://www.hackingforlove.com/content/images/2018/12/image-18.png" class="kg-image" alt="The mini closet standing desk (wood + plastic projects part 1)"><figcaption>Side view – supports</figcaption></figure><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image-19.png" class="kg-image" alt="The mini closet standing desk (wood + plastic projects part 1)"><figcaption>Isometric view – supports</figcaption></figure><p>It can be tricky to procure wood in Manhattan but we landed on a couple of good options. We needed someone who would cut the wood for us given the lack of power tools and space in our Manhattan apartment. 
We used ordinary 3/4 inch Canadian birch plywood from <a href="https://midtownlumber.com/">Midtown Lumber</a> and had it cut to fit the closet perfectly. The staff is friendly: they advertise their ability to help customers with DIY projects, are happy to interact over phone, email, or in person, and are conveniently located near us in Chelsea.</p><p>Unfortunately, I have found over time that their prices are inexplicably bloated (about 2-3x what I can find elsewhere in NYC). I have since started taking the subway to the Upper West Side to Mike's Hardware and Lumber to procure wood for other projects. They do good work and will quote reasonable prices on the phone and also deliver for a fee (though they don't have a web site).</p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2019/09/image-1.png" class="kg-image" alt="The mini closet standing desk (wood + plastic projects part 1)"><figcaption>Issues with the wood</figcaption></figure><p>The plywood had a few gaps and imperfections, so we used a bit of wood filler to fix them up:</p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2019/09/image-2.png" class="kg-image" alt="The mini closet standing desk (wood + plastic projects part 1)"></figure><p>We picked up some metal angle brackets and short wood screws and assembled the desk by mounting them on one side of the supporting structure as shown below:</p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image-17.png" class="kg-image" alt="The mini closet standing desk (wood + plastic projects part 1)"><figcaption>Mounted desk</figcaption></figure><p>As you can see, the desk slides right into the tracks. It left plenty of room for Melissa's MacBook and can be adjusted to any height using the dowels. 
</p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image-16.png" class="kg-image" alt="The mini closet standing desk (wood + plastic projects part 1)"><figcaption>Hard at work</figcaption></figure><p>Once we were happy with the desk, we finished and stained it to match the look of the rest of the closet. This is where Melissa's woodworking experience came in handy.</p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image-22.png" class="kg-image" alt="The mini closet standing desk (wood + plastic projects part 1)"><figcaption>Conditioning</figcaption></figure><p>She used a rag to apply the conditioner. We used Minwax pre-stain wood conditioner from our local hardware store.</p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image-23.png" class="kg-image" alt="The mini closet standing desk (wood + plastic projects part 1)"><figcaption>Staining</figcaption></figure><p>We picked the darkest stain at the store, Minwax wood finish penetrating stain, Ebony 2718, and applied 3 coats to get the darkness we needed to match the closet.</p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image-24.png" class="kg-image" alt="The mini closet standing desk (wood + plastic projects part 1)"><figcaption>one coat</figcaption></figure><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image-25.png" class="kg-image" alt="The mini closet standing desk (wood + plastic projects part 1)"><figcaption>two coats</figcaption></figure><p>Finally, she applied polyurethane with a brush, careful not to leave bubbles.</p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image-26.png" class="kg-image" alt="The mini closet standing desk (wood + plastic projects part 1)"><figcaption>Finishing 
with poly-urethane</figcaption></figure><p>Here's a photo of the final product! </p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image-28.png" class="kg-image" alt="The mini closet standing desk (wood + plastic projects part 1)"><figcaption>Final product</figcaption></figure><p>We were quite proud of this project. It was so simple and yet such a perfect fit for our needs. The entire thing cost us maybe $40 and we found working together on it much more satisfying than buying something at the store. </p><p>Sometimes the simplest projects are the best ones.</p>]]></content:encoded></item><item><title><![CDATA[Thank you cards and a face recognition prototype]]></title><description><![CDATA[Third Chapter - On how an image recognition prototype was assembled  to thank our guests and on the open source code it was built upon.]]></description><link>https://www.hackingforlove.com/thank-you-cards/</link><guid isPermaLink="false">5c26bf598d25d93df1b2a5ae</guid><category><![CDATA[machine learning]]></category><category><![CDATA[image recognition]]></category><category><![CDATA[face detection]]></category><dc:creator><![CDATA[Sebastián Estévez]]></dc:creator><pubDate>Sun, 13 Jan 2019 23:39:11 GMT</pubDate><media:content url="https://www.hackingforlove.com/content/images/2019/01/MS80418-995.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.hackingforlove.com/content/images/2019/01/MS80418-995.jpg" alt="Thank you cards and a face recognition prototype"><p>This post is about the start of a small effort to show our gratitude for those who were there supporting us on our wedding day and for those who could not be physically present.</p><p>I was predictably emotional on our wedding day. Among the wide array of feelings coursing through me, there were some which I did not expect. One of the more surprising was a feeling of invincibility. 
Invincibility may seem like a strange description of what you feel when getting married, so I'll try to explain. The love and support from our family and friends gave us what felt like a protective aura; nothing could ever stop us.</p><p>Life is full of challenges and I don't expect our marriage to be an exception. Yet, it is clear to me that we care deeply about each other and are willing to put each other before ourselves. We are blessed with loving families and friendships and can count on their example and their support to get us through tough times.</p><h1 id="the-prototype">The prototype</h1><p>Our wedding photographer Dariuz does not limit himself. He took over eighteen hundred lovely photos at the wedding, including the one we are using for our thank you cards:</p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2019/01/MS80418-1074.jpg" class="kg-image" alt="Thank you cards and a face recognition prototype"></figure><p>As Melissa handwrites the messages on the cards, I thought it might be a nice gesture to make personalized digital photo albums for our guests based on the photos they appear in.</p><p>The amount of automation achievable for a task like this one would have been limited just a few years ago, but the field of image recognition has seen enormous improvements recently thanks to advances in machine learning. There are a ton of related resources available online (open source code, examples, libraries, tutorials, etc.) that I was happy to be able to leverage.</p><p>So far I have put together a rough prototype in python and started working on productionizing it. </p><p>Overall, I was quite impressed with what can be accomplished with a search engine, a few scripts, a laptop, and a bit of tenacity. 
It is an exciting time to be coding; we truly stand on giants' shoulders.</p><p>Since this was my first foray into face recognition and image detection, I decided to write this post to discuss the prototype and share what I learned putting it together. I plan to have additional posts describing further refinements as well as steps I am taking to productionize the app.</p><p>I'll try to reference the resources I used as accurately as possible as I write. I also include a list of sources and reading material in the References section.</p><p>The prototype's functionality is as follows:</p><ul><li>Detect and crop faces in preparation for scoring</li><li>Score photos to obtain a sort of image fingerprint</li><li>Calculate differences between image fingerprints to find matching faces</li></ul><p>For my initial take at this, I'm using the following software libraries:</p><ol><li><a href="https://github.com/opencv/opencv">OpenCV</a> - An open source c++ library for computer vision with a python api</li><li><a href="https://github.com/davisking/dlib">DLib</a> - An open source c++ library for machine learning also with a python api</li><li>Pre-trained image detection and alignment models from CMU's <a href="https://github.com/cmusatyalab/openface">OpenFace</a> project</li><li>Tensorflow - a machine learning framework used to load trained models and score predictions against them.</li></ol><p>I'll break down the different activities the prototype performs and show some example code, but first a quick detour on docker.</p><h2 id="sidenote-python-dependency-management">Sidenote: python dependency management</h2><p>To get around having to modify my operating system's python installation, I ended up using docker to manage dependencies. I could have used python virtualenv, but I knew I needed to dockerize anyway for production later on (more on this in a future post).</p><p>Here's my Dockerfile:</p><pre><code>FROM ubuntu:18.04
LABEL Maintainer=&quot;Sebastián Estévez &lt;estevezsebastian@gmail.com&gt;&quot;

RUN apt-get update
RUN apt-get install -y python-pip build-essential cmake \
  pkg-config libx11-dev libatlas-base-dev \
  libgtk-3-dev libboost-python-dev

ADD requirements.txt ./

RUN pip install -r requirements.txt

RUN mkdir foto-app

WORKDIR foto-app

COPY ./ ./

#CMD /bin/bash
CMD python foto-app.py
</code></pre>
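<p>For reference, the `requirements.txt` pulled in above holds the python side of the dependencies. The actual file isn't shown in this post, so the list below is only an illustrative reconstruction based on the libraries discussed here, not the exact pinned contents:</p><pre><code>numpy
opencv-python
dlib
tensorflow
keras
flask  # assumption: something has to serve port 5000</code></pre>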
<p><strong>Notes</strong>: This handles both OS dependencies like `libatlas-base-dev` and python libraries in `requirements.txt`. For debugging purposes I'll sometimes uncomment the /bin/bash CMD instead of the python CMD.</p><p>I keep the commands for docker build, run, stop &amp; remove, and exec in my bash history, easily accessible with ctrl-r, and do most of my testing within the container.</p><pre><code>docker build -t foto-app . 

docker stop foto-app &amp;&amp; docker rm foto-app

docker run -p 5000:5000 -v ~/Pictures/wedding:/foto-app/images --name foto-app -d -t foto-app

docker exec -it foto-app /bin/bash
</code></pre>
<p><strong>Note</strong>: I'm mounting a host volume to hold my input photos and store my results. I also forward port 5000, which I used to troubleshoot the service once I was satisfied with the prototype and started working on the service (more on this in a future post).</p><h2 id="where-are-the-faces">Where are the faces?</h2><p>The neural network for image fingerprinting I am using was pre-trained by folks at <a href="http://cmusatyalab.github.io">CMU</a> against zoomed-in images of faces. As a result, the first step had to be detecting the faces in my raw photos and then aligning and cropping them in order to score them against the neural network. The resulting score, which I have been calling a fingerprint, is a distilled representation of the face, its most useful property being that it can be compared with another fingerprint to obtain a sort of distance or dissimilarity metric.</p><p>In short, I first had to determine where the face was in the photo before I could find out to whom it belonged.</p><h3 id="load-images-into-arrays">Load images into arrays</h3><p>First I load the image from a file [or a byte array in the case of a web service] using OpenCV's `imread` or `imdecode` functions respectively. </p><p>Both of these functions return a channel array representation of the pixels in the image formatted as BGR (for whatever reason blue is flipped with red, as opposed to the traditional rgb ordering used in most image and color formats).</p><p>Let's do an example:</p><p>We can create a 3d numpy array that represents a blue dot or pixel (blue = 255, green = 0, red = 0) and write it to a jpg:</p><pre><code>manualMatrix = np.array([[[255, 0, 0]]])
cv2.imwrite(&quot;test.jpg&quot;, manualMatrix)</code></pre>
<figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2019/01/image-6.png" class="kg-image" alt="Thank you cards and a face recognition prototype"><figcaption>a blue pixel</figcaption></figure><p>Inversely we can read the file back and get the BGR numpy array:</p><pre><code>imgMatrix = cv2.imread('test.jpg', 1)
print(imgMatrix)</code></pre>
<p>Which returns:</p><pre><code>[[[254   0   0]]]</code></pre>
<p>Interestingly, the blue value we got back turned out to be a bit less blue than the value in our original matrix; likely a result of jpeg lossiness.</p><p>Once we have this matrix representation of the pixels in the image (three values for blue, green, and red laid out in rows and columns), we can manipulate them with our image detection algorithms to find our faces.</p><h3 id="detecting-faces-in-the-image-array">Detecting faces in the image array</h3><p>OpenFace includes python code for dlib-powered face detection and alignment (align_dlib.py). For my prototype I pulled the alignment code verbatim from <a href="https://github.com/cmusatyalab/openface/blob/master/openface/align_dlib.py">the openface repo</a>. I also downloaded the binary representation of the pre-trained face detection model they use from the <a href="http://dlib.net/files/">dlib website</a>.</p><p>Having done that, loading and running the face alignment functionality is trivial:</p><pre><code>from align_dlib import AlignDlib

# Initialize the OpenFace face alignment utility 
alignment = AlignDlib('shape_predictor_68_face_landmarks.dat')

#load real file and flip r and b
imgMatrix = cv2.imread('MS80418-999.jpg', 1)
imgMatrix = imgMatrix[...,::-1]

alignedImg = alignment.align(96, imgMatrix,
    alignment.getLargestFaceBoundingBox(imgMatrix))

cv2.imwrite(&quot;aligned.jpg&quot;, alignedImg)
</code></pre>
<p>And here is a sample result (also note that I didn't flip back my reds and blues so the image appears bluish upon rendering):</p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2019/01/aligned.jpg" class="kg-image" alt="Thank you cards and a face recognition prototype"></figure><p>Under the hood, this openface function uses <a href="http://dlib.net/python/index.html#dlib.get_frontal_face_detector">dlib's `get_frontal_face_detector`</a> to find the largest face in the image and then crops its matrix representation. It turned out that this face detection method did not do exactly what I needed it to for my final implementation but it was good enough for the prototype.</p><h2 id="who-s-face-is-it">Whose face is it?</h2><p>At this point `imgMatrix` is ready for fingerprint extraction.</p><pre><code>global nn4_small2_pretrained
nn4_small2_pretrained = create_model()
global graph
graph = tf.get_default_graph()

nn4_small2_pretrained.load_weights('weights/nn4.small2.v1.h5')
</code></pre>
<p>In the snippet above, we load the OpenFace <a href="http://cmusatyalab.github.io/openface/models-and-accuracies/">pre-trained model</a> `nn4.small2.v1` using tensorflow. Python's `global` keyword makes the graph and the model accessible from multiple threads. This specific detail wasn't relevant for the prototype but would be for the final implementation.</p><pre><code>img = (alignedImg / 255.).astype(np.float32)

with graph.as_default():
  fingerprint = nn4_small2_pretrained.predict(np.expand_dims(img, axis=0))[0]
  print fingerprint
</code></pre>
<p>I normalize the color values in the matrix by dividing by 255 (the maximum value) to get values between 0 and 1 and feed it to the predict function of the model. The result is an array containing 128 numbers that represent the face.</p><p>Finally, to find out how similar two faces are, we can take the sum of the square of the differences of each value in the two arrays.</p><pre><code>  diff = np.sum(np.square(fingerprint1 - fingerprint2))
  print diff
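  # The smaller diff is, the more alike the two faces are, so a
  # simple decision rule is a cutoff (the 0.6 here is illustrative;
  # the actual threshold has to be picked empirically):
  threshold = 0.6
  is_match = diff &lt; threshold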
</code></pre>
<p>This difference function allows us to empirically select a threshold below which we will call two faces a match.</p><p>Here are a few examples of matching pairs:</p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2019/01/1508-128.png" class="kg-image" alt="Thank you cards and a face recognition prototype"></figure><p></p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2019/01/1487-1227.png" class="kg-image" alt="Thank you cards and a face recognition prototype"></figure><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2019/01/1429-285.png" class="kg-image" alt="Thank you cards and a face recognition prototype"></figure><p>Getting the threshold right can be tricky, but in my case we don't have to be perfect. Below, you can see two of my cousin's kids. The robot couldn't tell them apart even though they are clearly different people, a boy and a girl. In the end, I'll be sending a single album to their family, so the robot is off the hook on this one.</p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2019/01/1814-931.png" class="kg-image" alt="Thank you cards and a face recognition prototype"></figure><h1 id="opportunities">Opportunities</h1><p>I was quite happy with the results from the prototype. 
My future efforts will be focused on detection improvements to find and match more faces more accurately, on scaling the process across CPUs and machines, and on integrating it with the <a href="https://developers.google.com/photos/">google photos API</a> (finally released last year).</p><h1 id="references">References</h1><p><a href="https://cmusatyalab.github.io/openface/">https://cmusatyalab.github.io/openface/</a></p><p><a href="https://www.learnopencv.com/face-detection-opencv-dlib-and-deep-learning-c-python/">https://www.learnopencv.com/face-detection-opencv-dlib-and-deep-learning-c-python/</a></p><p><a href="https://arxiv.org/abs/1512.02325">https://arxiv.org/abs/1512.02325</a></p><p><a href="https://github.com/spmallick/learnopencv">https://github.com/spmallick/learnopencv</a></p><p><a href="http://nbviewer.jupyter.org/github/krasserm/face-recognition/blob/master/face-recognition.ipynb?flush_cache=true">http://nbviewer.jupyter.org/github/krasserm/face-recognition/blob/master/face-recognition.ipynb?flush_cache=true</a></p><p><a href="https://www.learnopencv.com/deep-learning-based-object-detection-and-instance-segmentation-using-mask-r-cnn-in-opencv-python-c/">https://www.learnopencv.com/deep-learning-based-object-detection-and-instance-segmentation-using-mask-r-cnn-in-opencv-python-c/</a></p>]]></content:encoded></item><item><title><![CDATA[3d printing primer]]></title><description><![CDATA[Second Chapter - About the mechanical and software related concepts needed to understand 3d printing at a high level]]></description><link>https://www.hackingforlove.com/3d-printing-primer/</link><guid isPermaLink="false">5c0c129d8d25d93df1b2a412</guid><category><![CDATA[3d-printing]]></category><dc:creator><![CDATA[Sebastián Estévez]]></dc:creator><pubDate>Tue, 01 Jan 2019 15:11:11 GMT</pubDate><media:content url="https://www.hackingforlove.com/content/images/2019/01/IMG_20180410_103729.jpg" medium="image"/><content:encoded><![CDATA[<img 
src="https://www.hackingforlove.com/content/images/2019/01/IMG_20180410_103729.jpg" alt="3d printing primer"><p>I decided to write a quick primer on 3d printing and how it works for the uninitiated.</p><h1 id="what-s-a-3d-printer-mechanically">What's a 3d printer mechanically?</h1><p>Most of today's 3d printers take plastic (in the form of a spool of filament), heat it up, and extrude it out a small nozzle.</p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image-6.png" class="kg-image" alt="3d printing primer"><figcaption>Blue PLA spool and some liquor</figcaption></figure><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image-2.png" class="kg-image" alt="3d printing primer"><figcaption>Clogged Extruder - it took some time and understanding for me to start getting consistently good prints&nbsp;</figcaption></figure><p>The extruders sit on a frame that can move along two axes (x and y) over a print bed. Either the extruders or the print bed, depending on the printer, can also move along the z axis.</p><p>This permits us to move the nozzle around slightly above the top of the print bed to make a plastic shape which we call a layer. Once the layer is done, we inch the nozzle a bit farther from the top of the print bed and repeat the process (usually printing on top of the plastic we have already laid down). Eventually we end up with a 3d piece of solid plastic.</p><h1 id="what-software-is-needed-for-3d-printing">What software is needed for 3d printing?</h1><p>Folks have been making computer-generated 3d shapes for years to support use cases including engineering design, 3d animations (movies, video games, etc.), architecture and so forth. There is a plethora of tools that play in this space and they output binary representations of the desired shapes in 3d space. 
In the 3d printing world, the instructions that drive our extruder and print bed around and control the gear that pushes or retracts the filament are based on a digital representation of each of the layers required to make the 3d shape.</p><p>3d printing requires a bit of software, called the slicer, that takes 3d models and converts them into the set of instructions printers need to do their job. The good news is that although there are many tools and multiple standards for 3d models, there's usually a way to get them converted into something my slicer software can handle.</p><p>TL;DR - if the goal is to generate 3d shapes to print, we can pretty much use any 3d modeling software out there (from AutoCAD to Maya and beyond) and take its output and pass it to our slicer.</p><h2 id="3d-modeling-software">3d modeling software</h2><p>From what I have seen so far, there are two main classes of software useful for generating 3d models. They are based on whether the desired output is something organic-looking or artistic (something you might sculpt with your hands) or something functional and geometric (something you might want to design numerically or even program). I think you need one of each depending on what you're trying to make. The two main software packages I have settled on (respectively) are <a href="http://pixologic.com/sculptris/">Sculptris</a> and <a href="http://www.openscad.org/">OpenScad</a>.</p><h3 id="sculptris">Sculptris</h3><p>Sculptris is the free version of zbrush, a really nice 3d environment that reminds me of playing with clay as a kid. You start with a ball of material and the interface allows you to do things like pinch, smooth, draw, etc. all with your mouse. 
I had played a bit with zbrush many years ago and was happy to find that Sculptris is free and pretty full-featured (downside: they don't have a linux version).</p><p>Here's something I designed with Sculptris:</p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image-7.png" class="kg-image" alt="3d printing primer"><figcaption>rhino head</figcaption></figure><p>I am really happy with Sculptris for artistic / organic modeling. I really enjoy how intuitive their interface is and how quickly you can obtain decent-looking results as a beginner.</p><p>I am still considering getting into Blender at some point, perhaps not for 3d printing but for VR/AR or animations. Blender is probably the most active open source 3d modeling project out there and arguably the most powerful (including commercial solutions). Some really spectacular art and movies have been built entirely on Blender. What's stopping me? I have tried it a couple of times and found the UI a bit too complex and the learning curve too steep given my limited time. There is a new version in beta that apparently addresses some of these concerns, so perhaps I'll give it another go sometime soon.</p><h3 id="openscad">OpenScad</h3><p>OpenScad is a wonderful, very simple open source system that allows you to build 3d models programmatically. You build your models using a programming language that has primitives for things like `cube` and `cylinder` and functions to manipulate them like `color`, `translate`, and `difference`. OpenScad ships with a convenient little development environment that allows you to preview your model as you go and render it when you're done. 
It also has a headless mode that you can call from the command line, which I used heavily for one project so far.</p><p>There are plenty of CAD programs that would fall into the functional / geometric category but I really love the extensibility, simplicity, and precision of OpenScad's programmatic take on the problem. I don't see myself trying something else that is more point-and-click in this category.</p><h3 id="3d-modeling-on-mobile">3d modeling on mobile</h3><p>These days you can do some quick and dirty modeling right on your mobile device. I spent some time on the iOS and Google app stores and my favorite by far is <a href="https://3dc.io/">3DC.io</a>. It's simple, kid friendly, and has a free version that is quite usable.</p><p>Here's some stuff I quickly whipped up on 3DC:</p><figure class="kg-card kg-embed-card"><iframe width="459" height="344" src="https://www.youtube.com/embed/4hHd6lLA-C4?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><figcaption>closet desk pitch video</figcaption></figure><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image-5.png" class="kg-image" alt="3d printing primer"><figcaption>3d platypus named Pete</figcaption></figure><h2 id="a-slicer">A Slicer</h2><p>Once you have a 3d model, you need to turn it into a set of instructions for your printer to execute in order to make a piece of plastic.</p><p>There are a few options for slicing but my favorite is called Slic3r, which is an open source program with a relatively active community. 
Specifically I have been using the <a href="https://github.com/prusa3d/Slic3r">Prusa3D</a> fork, which seems to be more actively maintained than the main Slic3r and has a few nice features, including variable layer height (you can set different print quality for different parts of your model).</p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image-14.png" class="kg-image" alt="3d printing primer"><figcaption>Variable layer height</figcaption></figure><p>There are a few different instruction formats supported by slicers. My printer, the FFCP, supports `x3g`, which is not natively supported in Slic3r, so I also rely on a script to convert the supported `gcode` format into `x3g` format. Specifically for the FFCP I highly recommend <a href="https://www.dr-lex.be/info-stuff/print3d-ffcp.html#slice_down">Dr. Lex's site</a>, especially for more advanced tips.</p><p>On the commercial side, I have heard great things about <a href="https://www.simplify3d.com/buy-now/">Simplify3D</a> though I have yet to try it myself.</p><h2 id="monitoring-and-management">Monitoring and Management</h2><p>I'm into monitoring systems in general so it didn't take me long to start looking for a way to remotely submit and track print jobs. I quickly landed on <a href="https://octoprint.org/">OctoPrint</a>, a nice web UI that allows you to kick off jobs (you can integrate it with Slic3r directly so you can submit jobs from there). It's pretty full featured, including a live video feed of your print bed as you're printing (works with a regular webcam) as well as automatically generated time lapses for completed prints.</p><p>It runs on Linux, in Docker, and on your Raspberry Pi. 
For the latter there's a nice Pi image, <a href="https://octoprint.org/download/">OctoPi</a>, that's quite easy to set up and configure.</p>]]></content:encoded></item><item><title><![CDATA[Hacking for love - a new blog and a bunch of plastic names]]></title><description><![CDATA[First Chapter - Which deals with the first love hacking excursion I documented involving some names made out of plastic]]></description><link>https://www.hackingforlove.com/intro-post/</link><guid isPermaLink="false">5bd9e5178d25d93df1b2a37a</guid><category><![CDATA[3d-printing]]></category><dc:creator><![CDATA[Sebastián Estévez]]></dc:creator><pubDate>Wed, 31 Oct 2018 17:30:47 GMT</pubDate><media:content url="https://www.hackingforlove.com/content/images/2019/01/names-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.hackingforlove.com/content/images/2019/01/names-1.jpg" alt="Hacking for love - a new blog and a bunch of plastic names"><p>I've had this domain name and a few half-written articles in my back pocket for a few months now. I decided to publish the first couple of articles to start off the new year and to try to keep adding content consistently throughout the year. Let's see how it goes. </p><p><em><strong>Happy 2019!</strong></em></p><h1 id="non-flowery-intro">Non-flowery intro</h1><p>I thought about writing some flowery prose on why I'm starting this blog but I decided it's pretty self-explanatory. I'm sharing stuff I build for people I care about and the details / code examples / etc. on how I built them.</p><p>For context, I subscribe to the <a href="https://www.youtube.com/watch?v=H-mQHPIhBzU">looser</a> definition of the term <em>hacking</em>. 
The <em>love</em> part? Well, it's just your everyday caring, nice, not-creepy love for my friends and family, starting of course with my wife Melissa.</p><p>Now let's move on to the part with the plastic names and how I made them.</p><h1 id="a-bunch-of-plastic-names">A bunch of plastic names</h1><p>For our wedding, I had a few [love hacking] projects I wanted to do and they turned out to be a lot of work. I ended up cramming a bunch of activities into the last few dozen hours before the wedding. Thinking back, we probably could have gotten away without expending that effort and additional stress. Especially when combining projects with big life events, it is important to remember that this is supposed to be fun and that we do it to make folks happy. Sometimes giving up on a project or simply running out of time is okay. I had a bunch of help from my fiancée (now wife) and from my sister to make this project happen. Fortunately in this case, the results were timely and quite satisfying!</p><h3 id="making-plastic-names">Making plastic names</h3><p>Melissa knew I had been wanting to get into 3d printing and got me a lovely printer for my birthday. One of the first projects I took on was to create the name cards for our wedding. Here's a photo of the final result:</p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image.png" class="kg-image" alt="Hacking for love - a new blog and a bunch of plastic names"></figure><p>If you can't tell from this vantage point, the little names are made of plastic and were glued onto our guests' seating cards.</p><h3 id="side-note-on-the-printer">Side note on the printer</h3><p>Someday I'll get Melissa to guest blog on criteria for picking your first additive manufacturing device. For now know that mine's a pretty nice one with a solid metal frame, a heated bed, and a nicely sized print area. 
Even though I did not have to build it myself and it came practically ready to print out of the box, it was a bit tricky to use at first. </p><p>Fortunately there are many resources on the internet that helped me get going. One of the main factors that helped me start producing successful prints was finding this <a href="https://www.dr-lex.be/info-stuff/print3d-ffcp.html#slice_down">website</a> by someone who uses the same printer I have (thank you Dr. Lex for sharing your wisdom and your x3g generator).</p><p>I did a quick write-up on how 3d printing works, covering some of the software I use for printing; it might be worth a read for the unfamiliar before continuing. Check it out <a href="https://www.hackingforlove.com/3d-printing-primer/">HERE</a>.</p><h3 id="how">How?</h3><p>About 150 guests attended the wedding, and the first step was generating 3d models of their names. I knew I needed to make my solution as automated as possible. This made running OpenScad in headless mode an attractive option.</p><p>OpenScad supports the `text` function. It takes a text string and generates a 2D shape that you can then extrude and manipulate.</p><p>Here's what the code looks like to spell my name in 1mm thick 3d letters, font size 10:</p><pre><code>t=&quot;Sebastián Estévez&quot;;
size=10;
linear_extrude(height = 1)
    text(t, size = size);</code></pre>
<p>Which results in the following model:</p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image-10.png" class="kg-image" alt="Hacking for love - a new blog and a bunch of plastic names"><figcaption>my name default font</figcaption></figure><p>It even supports importing custom fonts to achieve the desired look and feel. A good resource for free fonts is the <a href="https://github.com/google/fonts/tree/master/ofl">Google Fonts repo</a> on GitHub.</p><p>Here's my name using Allura-Regular:</p><pre><code>use &lt;/home/tato/Downloads/Allura-Regular.ttf&gt;
t=&quot;Sebastián Estévez&quot;;


size=10;
spacing=.92;
text(t, size = size, font = &quot;Allura-Regular&quot;, spacing=spacing);</code></pre>
<figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image-11.png" class="kg-image" alt="Hacking for love - a new blog and a bunch of plastic names"><figcaption>my name font and spacing</figcaption></figure><p>At this point I had my POC; I just needed to iron out the details. As usual, these "details" ended up taking up 80% of my time and effort. The main challenges were connecting letters, rounding edges, and parallelization.</p><h3 id="connecting-letters">Connecting letters</h3><p>Depending on the name, the letters may or may not touch (and hold together as one piece). I didn't want to have to glue more than one thing to the name cards, and even considered printing out clips or stands for the names.</p><p>I also wanted a way to have first names and last names tied together in a single part.</p><p>My initial take was to use bold bubbly letters that would smush together:</p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image-3.png" class="kg-image" alt="Hacking for love - a new blog and a bunch of plastic names"></figure><p>They weren't super legible and didn't fit Melissa's aesthetics so I decided to go with something cursive and added a sort of underline to hold it all together. You can see my progression in the series of photos below:</p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image-4.png" class="kg-image" alt="Hacking for love - a new blog and a bunch of plastic names"></figure><p>The trick to getting the underlines right is that not all characters in a given font occupy the same amount of space. For a quick demonstration, here's 50 i's and 50 m's:</p><p>iiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii<br>
mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm</p>
<p>The right solution to this problem would have been to measure the length of the generated text and use that length to create the line underneath it. Unfortunately, this is not currently possible in OpenScad, see issue <a href="https://github.com/openscad/openscad/issues/1768">1768</a>. </p><p>What I ended up doing was starting with an average length per letter, multiplying by the number of letters, and adding or subtracting a fudge factor based on the experimentally obtained lengths of the wider and thinner letters in the font I was using.</p><p>An important point about OpenScad is that it uses compile time (not run time) variable assignment. If a variable is set more than once in a script, the final value is used for the entire script execution. Furthermore, variables cannot be used in their own assignments (this results in an undefined-variable error at compile time). In practice this means that I ended up creating a globalFudge factor composed of the sum of a few letter-specific fudge factors. A bit ugly, but it worked.</p><p>I deliberately erred on the side of printing an underline that might be a bit too long, because the excess is pretty easy to physically snip off with pliers once the print is completed.</p><p>Here are some examples of my fudge code:</p><pre><code>mSearchArray=search(&quot;M&quot;,t,0);
fudgeFactor=len(mSearchArray[0]);
</code></pre>
<p>...<br>
...<br>
...</p>
<pre><code>iSearchArray=search(&quot;i&quot;,t,0);
fudge5= fudge4 - len(iSearchArray[0])/4;
</code></pre>
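<p>Pieced together, my underline-length estimate looks roughly like the sketch below. The average and per-letter widths here are hypothetical placeholders rather than my real measurements; each font needs its own experimentally obtained values:</p>

```openscad
t = "Sebastián Estévez";
size = 10;

// Hypothetical average glyph width at this font size (measure for your font).
avgWidth = 4.5;

// search() with num_returns_per_match=0 returns a vector whose first
// element lists every match index, so len() gives the occurrence count.
wideCount = len(search("m", t, 0)[0]) + len(search("M", t, 0)[0]);
thinCount = len(search("i", t, 0)[0]) + len(search("l", t, 0)[0]);

// Estimated underline length: average width times letter count,
// nudged up for wide glyphs and down for thin ones.
lineLen = len(t) * avgWidth + wideCount * 2 - thinCount * 1.5;

linear_extrude(height = 1) {
    text(t, size = size, font = "Allura-Regular", spacing = .92);
    // the connecting underline, sitting just below the baseline
    translate([0, -size / 4]) square([lineLen, 1]);
}
```

<p>Since overshoot is easy to snip off, it pays to bias the estimate high rather than low.</p>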
<p>Needless to say, a different font would likely need its own fudge factors.</p><h3 id="rounding-edges">Rounding edges</h3><p>I really wanted the names to be smooth and rounded at the corners. In reality, I ended up printing the names relatively small, and with my printer resolution of about .2mm it probably did not make that much of a difference in the prints. Additionally, it added significant time to the generation of the models (not only during design but especially during rendering). </p><p>On the bright side, the technique I ended up using to make the rounded edges is interesting and will probably come in handy at some point, so I don't consider it a total waste of time.</p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image-13.png" class="kg-image" alt="Hacking for love - a new blog and a bunch of plastic names"></figure><p>The nice soft edges above are obtained by generating a sphere and basically rubbing it around the outside of the text. You can also think of it as placing a copy of the sphere centered at every point that comprises the text.</p><p>This is called a Minkowski sum and is conveniently available as a <a href="https://en.wikibooks.org/wiki/OpenSCAD_User_Manual/Transformations#minkowski">function</a> in OpenScad. </p><pre><code>minkowski(){
linear_extrude(height = height)
    text(t, size = size, font = &quot;Allura-Regular&quot;, spacing=spacing);
scale([1,1,1.5])
    sphere(r=radius);  // spheres are always centered, no center parameter needed
}
</code></pre>
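<p>One knob worth knowing about: the cost of the Minkowski sum grows quickly with the facet count of the sphere, so explicitly capping `$fn` can cut render times dramatically. A sketch (the values are just starting points, and it assumes the font was loaded with `use` as above):</p>

```openscad
t = "Sebastián Estévez";
height = 1;
radius = .4;

minkowski(){
    linear_extrude(height = height)
        text(t, size = 10, font = "Allura-Regular", spacing = .92);
    // $fn caps the sphere's facet count; for sub-millimeter rounding,
    // a coarse sphere is hard to spot in the final print and renders
    // far faster than one at the default resolution.
    scale([1,1,1.5])
        sphere(r = radius, $fn = 12);
}
```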
<h3 id="parallelization">Parallelization</h3><p>As you can see in the htop output below, computing the sum is time- and CPU-consuming and is limited by the fact that OpenScad is single threaded. </p><figure class="kg-card kg-image-card"><img src="https://www.hackingforlove.com/content/images/2018/12/image-9.png" class="kg-image" alt="Hacking for love - a new blog and a bunch of plastic names"><figcaption>OpenScad pegging a core – minkowski</figcaption></figure><p>In order to print names for our 150 guests in time, I ended up doing some multi-process parallelization in bash, running the jobs across my home lab cluster. The Minkowski sum is also memory intensive, so I did have to keep my overall core count down to avoid failed jobs and "<a href="https://plumbr.io/blog/memory-leaks/out-of-memory-kill-process-or-sacrifice-child">sacrifice child</a>" entries in `dmesg`.</p><pre><code>#!/bin/bash

set -x

#Stop any openscads that are running
ps -ef | grep -i openscad | grep output | awk '{print $2}' | xargs -r kill

#names.txt should have a name in each line
readarray -t NAMEARRAY &lt; names.txt

echo &quot;There are ${#NAMEARRAY[*]} names&quot;

cores=3
i=0
while [ $i -lt ${#NAMEARRAY[*]}  ]; do
 
    j=0
    while [ $j -lt $cores ]; do
        index=$(( $i + $j ))
        NAME=${NAMEARRAY[$index]}
        if [ ! -z &quot;$NAME&quot; ]; then
            echo &quot;generating $NAME&quot;
            # quote the paths so names containing spaces work
            sed &quot;s/^t=.*/t=\&quot;$NAME\&quot;;/&quot; name.scad &gt; &quot;model-$NAME.scad&quot;
            openscad -o &quot;output-$NAME.stl&quot; &quot;model-$NAME.scad&quot; &amp;
        fi
        j=$(($j + 1))
    done
    wait
    i=$(($i + $cores))
done
</code></pre>
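<p>For anyone adapting the script, its heart is the sed templating step that stamps each guest's name into the model. Isolated, and with a hypothetical stand-in for my real name.scad, it looks like this:</p>

```shell
#!/bin/bash
set -e

# Work in a scratch directory; name.scad here is a stand-in template
# whose first line is the t= assignment the script rewrites.
cd "$(mktemp -d)"
printf 't="placeholder";\nsize=10;\ntext(t, size = size);\n' > name.scad

NAME="Melissa Estévez"

# Rewrite only the t= line; everything else passes through untouched.
sed "s/^t=.*/t=\"$NAME\";/" name.scad > "model-$NAME.scad"

head -1 "model-$NAME.scad"   # prints: t="Melissa Estévez";
```

<p>Quoting the generated file paths matters here, since guest names contain spaces.</p>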
<h2 id="next-steps">Next steps?</h2><p>If I find myself doing more render-intensive model generation, I might step my game up and stand up a k8s service for OpenScad.</p><p>That's it for today, folks, hope you enjoyed my first post!</p>]]></content:encoded></item></channel></rss>