Muvicado Mindware

Category: blog post

  • Pentominoes – Photo to JSON and beyond

    Yesterday, I decided to start doing physical things for a while rather than playing and programming games in the digital world. So when I found this dual set of 12 + 12 pentominoes this morning in my pile of physical games, it seemed a no-brainer to pick it up and noodle around with these shapes as I did years ago when I bought it (at Barnes and Noble, I think).

    I spread out the tiles on my lap desk.

    12 + 12 + 1 foam pentominoes

    I sorted them into two groups, with every second pair swapped, then took a picture with my phone to see how hard it would be to bring these shapes into the digital domain.

    photo – straightened and cropped

    The other day I created a QR code with Gravatar and got carried away trying to find all the black dots programmatically, which was fun. Since it seemed it should be easy to pull this pentomino data in from a photo, today I thought I’d try to bring this layout into the computer.

    I prompted ChatGPT:
    “Polyomino image data extraction. Analyze this image. Produce an array of ASCII characters representing the green (‘G’) and purple (‘P’) squares. Other squares (‘-‘). The grid size is 34 x 10.”

    Here’s the extracted 34 × 10 ASCII grid representing the image:

    GGGGGGGGGG------------------------
    GGPPP--GGG--------------------G---
    G-PPP---GG------------------GG-G--
    G-PPPPP--G------------------------
    G-PPPPPPGGGG----------------------
    G--PP---GGGG----------------------
    G--PG---GG----------G-------------
    G--G-----G---------GG-------------
    G------------------G--------------
    GG-G-----G-GG-----G---------------

    Each character corresponds to one cell:

    • G = green square
    • P = purple square
    • - = background / empty space

    That result is pretty close, but some of the background squares are mistakenly interpreted as purple. Try setting a baseline for the RGB values of two sample reference cells by sampling and averaging the center 9 pixels of each. The top-left corner cell is green. The top-right corner cell is purple.

    Got it — here are the baseline RGB averages for the two reference cells (averaged over their 9-pixel centers):

    • 🟩 Green cell (top-left) ≈ (91, 201, 172)
    • 🟪 Purple cell (top-right) ≈ (143, 117, 165)
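    The center-9-pixel sampling step can be sketched in plain Python like this (a sketch, not ChatGPT's actual code: the pixel grid is a simple 2-D list here, and the cell size in pixels is an assumed parameter):

```python
def cell_center_avg(pixels, col, row, cell_px):
    """Average the RGB values of the center 3x3 pixels of grid cell
    (col, row). `pixels` is a 2-D list indexed pixels[y][x] -> (r, g, b);
    `cell_px` is the cell size in pixels (an assumption, not from the chat)."""
    cx = col * cell_px + cell_px // 2  # pixel coordinates of the cell center
    cy = row * cell_px + cell_px // 2
    window = [pixels[cy + dy][cx + dx]
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return tuple(sum(p[i] for p in window) / len(window) for i in range(3))
```

    With a real photo you would fill `pixels` from an image library such as Pillow, but the averaging logic stays the same.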

    Next, I can use these as color baselines and classify each cell by comparing its average color to the nearer of these two (using Euclidean RGB distance, with a neutral threshold to mark background as '-'). Would you like me to proceed with that refined extraction step now? 
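    The nearest-reference classification proposed here can be sketched as follows (the reference colors are the ones from the chat; the threshold value is a guess, not something the chat specified):

```python
import math

GREEN = (91, 201, 172)    # top-left reference cell (from the chat)
PURPLE = (143, 117, 165)  # top-right reference cell (from the chat)

def rgb_dist(a, b):
    """Euclidean distance between two RGB triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(rgb, threshold=80.0):
    """Label a cell 'G' or 'P' by the nearer reference color, or '-'
    when neither reference is within `threshold` (a guessed value)."""
    dg, dp = rgb_dist(rgb, GREEN), rgb_dist(rgb, PURPLE)
    if min(dg, dp) > threshold:
        return '-'
    return 'G' if dg <= dp else 'P'
```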

    Yes
    Here’s the improved 34 × 10 ASCII grid, now using calibrated color references from the top-left (green) and top-right (purple) cells:

    ✅ Interpretation:

    • G = green pieces
    • P = purple pieces
    • - = background

    Would you like me to refine it further (e.g., adjust the neutral/background threshold or generate a color overlay image showing each detected cell)?

    That is pretty close!
    Would adding a 3rd reference cell improve the classification? Try averaging the RGB values of all the cells you classified as background, and use that as the reference color for background cells. Then run it again, using the previous green and purple reference colors and the new background reference color.

    Nice idea—adding a background reference helps. I computed the background color by averaging the center 9 pixels of every cell previously classified as -, then reclassified each cell by the nearest of the three references (green, purple, background).

    Reference RGBs (center-9-pixel averages):

    • Green (top-left): (91, 201, 172)
    • Purple (top-right): (143, 117, 165)
    • Background (from previous - cells): (65, 91, 76)
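    The three-reference pass described above might look like this (a sketch: the flat lists of per-cell average colors and first-pass labels are assumed inputs, and the function names are mine):

```python
GREEN = (91, 201, 172)    # top-left reference (from the chat)
PURPLE = (143, 117, 165)  # top-right reference (from the chat)

def dist2(a, b):
    """Squared Euclidean RGB distance (same ordering as the true distance)."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def reclassify(cell_colors, first_pass):
    """Average every cell the first pass labelled '-' into a background
    reference color, then relabel each cell by the nearest of the three
    references (green, purple, background)."""
    bg = [c for c, lbl in zip(cell_colors, first_pass) if lbl == '-']
    bg_ref = tuple(sum(c[i] for c in bg) / len(bg) for i in range(3))
    refs = {'G': GREEN, 'P': PURPLE, '-': bg_ref}
    return [min(refs, key=lambda k: dist2(refs[k], c)) for c in cell_colors]
```

    Because the background reference is derived from the image itself, this third anchor absorbs cells that sit far from both piece colors instead of forcing them toward green or purple.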

    Here’s the updated 34×10 ASCII grid:

    Legend:

    • G = green squares
    • P = purple squares
    • - = background/empty

    If you want, I can also export a small PNG overlay that shows the cell boundaries with their detected labels, so you can visually verify cell-by-cell.

    Awesome idea!

    I eventually ended up with some nice, annotated images.
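    The annotated-overlay idea can be sketched with Pillow (a sketch under assumptions: the function name, cell size, and colors here are mine, not from the chat):

```python
from PIL import Image, ImageDraw  # Pillow

def draw_overlay(photo_path, grid_rows, cell_px, out_path):
    """Draw cell boundaries and detected 'G'/'P' labels on top of the
    photo. `grid_rows` is the list of ASCII grid rows; `cell_px` is the
    cell size in pixels (assumed known after straightening and cropping)."""
    img = Image.open(photo_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    for r, row in enumerate(grid_rows):
        for c, ch in enumerate(row):
            x0, y0 = c * cell_px, r * cell_px
            draw.rectangle([x0, y0, x0 + cell_px - 1, y0 + cell_px - 1],
                           outline=(255, 255, 0))  # yellow cell boundary
            if ch != '-':
                draw.text((x0 + 3, y0 + 2), ch, fill=(255, 0, 0))
    img.save(out_path)
```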

    It made me think I could use this concept in a polyomino game app idea I’ve been toying with.

  • Reverse Maze

    The screenshot below shows what the Reverse Maze app looks like (early version). The tabs reveal different maze sizes, and the app also provides a few controls along the left side.

    See the Reverse Maze page for more info, including how to try out the app.

  • Molecule Maker

    Version 1.0 for iOS will be released soon on the App Store!