<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[devpassion Tech Insights, Open Source & Personal Growth]]></title><description><![CDATA[Explore technical deep-dives into PHP, MySQL, and Git, alongside honest personal reflections on productivity, mental health, and the daily life of a developer.]]></description><link>https://joebordes.com</link><image><url>https://cdn.hashnode.com/uploads/logos/6103019e9036db5b3513b0de/ad4e6666-24f2-46ae-a08d-b4619f6dcd74.png</url><title>devpassion Tech Insights, Open Source &amp; Personal Growth</title><link>https://joebordes.com</link></image><generator>RSS for Node</generator><lastBuildDate>Fri, 17 Apr 2026 17:59:28 GMT</lastBuildDate><atom:link href="https://joebordes.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[The Relentless Push]]></title><description><![CDATA[This past week, I found myself at APIsec conference 2026, a conference I heartily recommend for API builders in general and those responsible for API security in particular. This year was a buzzing hive of inn]]></description><link>https://joebordes.com/the-relentless-push</link><guid isPermaLink="true">https://joebordes.com/the-relentless-push</guid><category><![CDATA[life]]></category><category><![CDATA[Absurdism]]></category><category><![CDATA[AI]]></category><category><![CDATA[Career Growth]]></category><category><![CDATA[Philosophy]]></category><category><![CDATA[personal development]]></category><dc:creator><![CDATA[Joe Bordes]]></dc:creator><pubDate>Sun, 22 Feb 2026 18:39:54 GMT</pubDate><enclosure url="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6103019e9036db5b3513b0de/3106ac15-c017-4bf9-8f6d-03cb9b6cd7cf.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This past week, I found myself at <a href="https://www.apisec.ai/">APIsec conference 2026</a>, a conference I heartily recommend for API builders in general and those responsible for API security in particular. This year was a buzzing hive of innovation and rapid-fire AI revelations. The air crackled with new ideas, groundbreaking advancements, and the palpable excitement of a future being built in real-time. As I listened to the talks, a familiar, unsettling feeling began to creep in. It wasn't just that the pace was frenetic, incredibly fast, even for someone who tries to keep up. It was a deeper, more profound sensation: <strong>the world was moving on, and I was being gently, but relentlessly, pushed out.</strong></p>
<p>It's a feeling I've come to associate with a vivid, recurring image in my mind – that of a giant arcade coin pusher machine. You know the ones: where a shelf full of coins slowly inches forward, pushed by a mechanical arm, until some of them tumble over the edge, creating a satisfying cascade and, hopefully, a jackpot.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6103019e9036db5b3513b0de/2b1882e7-400b-4e50-81a0-64697daf2091.jpg" alt="" style="display:block;margin:0 auto" />

<p>In my mental metaphor, however, the coins aren't inanimate currency; they're people. We are all on that platform, and the relentless march of time, technology, and societal advancement is the mechanical arm, pushing us forward.</p>
<p>As we get older, the world seems to accelerate its pace, widening the gap between what we know and what is becoming. Every day, the lexicon changes, new paradigms emerge, and the familiar ground shifts beneath our feet. We know less of what is happening, understand even less of what the changes truly mean, yet these changes fully affect us. The once-stable landscape of our careers, our social interactions, even our understanding of basic functionality, begins to erode. We are getting closer and closer to the edge, where the familiar drops away into the unknown.</p>
<p>In this human coin pusher, the front of the platform is often occupied by the older generations, those who have seen countless waves of change, and are now perhaps weary from the constant adaptation. They are the ones feeling the most immediate and profound force of the push, their accumulated knowledge sometimes deemed less relevant by the shiny new coins appearing further back on the platform. Their experiences, while rich and invaluable, can feel devalued in a world obsessed with the next big thing.</p>
<p>But it's not just age that determines your position. Sadly, the cruel logic of the coin pusher also means that some younger people find themselves pushed to the front prematurely. Perhaps it's an industry that became obsolete, a skill set no longer in demand, or simply an unfortunate turn of economic events. They, too, experience the jarring sensation of being rendered obsolete, even as their peers further back on the platform seem to be thriving in the new currents. The front edge of the coin pusher isn't exclusively for the old; it's for anyone caught in the wrong place at the wrong time, anyone who hasn't managed to adapt or find a new foothold.</p>
<p>The AI conference hammered this metaphor home. The sheer velocity of progress in artificial intelligence felt like the mechanical arm speeding up, its push becoming more insistent. New tools, new concepts, new ethical dilemmas – each presentation was another nudge forward, another reminder that the landscape of knowledge is being redefined at an unprecedented rate. My initial thought was one of frantic catch-up, a desperate attempt to grasp every new acronym and understand every algorithm. But then came the deeper realization: it's not just about keeping pace; it's about acknowledging the possibility of being pushed out, regardless of how hard you try.</p>
<p>This isn't a lament against progress, nor a call to halt the relentless march of innovation. It's an observation, a shared human experience in an ever-evolving world, nothing new. The coin pusher of life is a constant. The platforms will always move, new coins will always be introduced, and some will always reach the edge. Perhaps the true challenge isn't to avoid the edge, but to <strong>enjoy the mere existence of the game</strong>. This is an idea that has always felt comfortable to me: <a href="https://en.wikipedia.org/wiki/Absurdism"><strong>Absurdism</strong></a>, the philosophy championed by Albert Camus. Camus argued that there is a fundamental conflict between the human longing for order and meaning and the "silent," chaotic indifference of the universe.</p>
<p>Watching the coin pusher of society, we realize the machine doesn't care about our experience, our history, or our efforts to stay on the ledge. The push is indifferent. To look at the rapid, confusing advancements of the world and feel "pushed out" is to come face-to-face with the <strong>Absurd</strong>.</p>
<p><strong>So, what do we do when we feel that inexorable nudge toward the edge?</strong></p>
<p>Camus suggested that we shouldn't despair or retreat into false hope. Instead, we should live in <strong>defiant rebellion</strong>. We recognize that the machine will eventually push us over, but we continue to think, to create, and to observe the game with a wink. We find joy in the silver reflection of the coins and the hum of the motor, even as we move toward the drop.</p>
<p>Perhaps the true challenge isn't to avoid the edge, but to walk toward it with our eyes wide open. We may be getting pushed out of the world’s current "wave," but there is a strange, quiet freedom in acknowledging the ledge. As Camus famously concluded in <em>The Myth of Sisyphus</em>, as his protagonist watched his rock roll back down the hill for the thousandth time: <strong>"One must imagine Sisyphus happy."</strong> Even on the edge of the coin pusher, we can be happy—not because the machine stopped, but because it no longer has the power to surprise us.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6103019e9036db5b3513b0de/e774a389-2b8d-48f8-902f-8cdd067a5e4b.png" alt="" style="display:block;margin:0 auto" />]]></content:encoded></item><item><title><![CDATA[The 60% Effort Rule in the Age of AI]]></title><description><![CDATA[Years ago, I established a set of guidelines for defining tasks in our system. The philosophy was simple: ambiguity kills productivity. We introduced concepts like the "Analyst-Reviewer Conversation" and the "60% Effort Rule" to ensure that by the ti...]]></description><link>https://joebordes.com/the-60-effort-rule-in-the-age-of-ai</link><guid isPermaLink="true">https://joebordes.com/the-60-effort-rule-in-the-age-of-ai</guid><category><![CDATA[task management]]></category><category><![CDATA[codereview]]></category><category><![CDATA[Collaboration]]></category><category><![CDATA[documentation]]></category><dc:creator><![CDATA[Joe Bordes]]></dc:creator><pubDate>Sat, 07 Feb 2026 19:22:05 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770491680459/2cad3553-d229-4046-af9b-bfe729cbc09d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Years ago, I established a set of guidelines for defining tasks in our system. The philosophy was simple: <strong>ambiguity kills productivity</strong>. We introduced concepts like the "<strong>Analyst-Reviewer Conversation</strong>" and the "<strong>60% Effort Rule</strong>" to ensure that by the time we started coding, we knew exactly where we were going.</p>
<p>Today, we are introducing a new player to the team: <strong>AI Code Reviewers</strong> (like <a target="_blank" href="https://www.atlassian.com/software/rovo">Atlassian Rovo</a>).</p>
<p>This doesn't mean we throw away our old process. On the contrary, our original structure is more relevant than ever. The AI simply changes <em>who</em> validates our work.</p>
<p>Here is the updated guide on how to define a task effectively, ensuring it satisfies both the human developer and the automated agent.</p>
<h2 id="heading-the-core-philosophy-can-i-do-this-without-programming">The Core Philosophy: "Can I do this without programming?"</h2>
<p>Before we even open a ticket, the "<strong>Analyst-Reviewer Conversation</strong>" must happen. The most important question remains: <strong>"Can I do this without programming?"</strong></p>
<ul>
<li><p><strong>Yes:</strong> Use existing settings, parameters, maps, or global variables. Do not write code if configuration will suffice.</p>
</li>
<li><p><strong>No:</strong> Then the goal changes. <strong>Can I build this feature in a way that allows us to do it <em>without</em> programming next time?</strong></p>
</li>
</ul>
<p>I consider this question a structural turning point in how we design features. Treating every client request as a potential configuration problem rather than a coding problem forces us to think in terms of <strong>capabilities</strong>, not <strong>patches</strong>.</p>
<p>Instead of implementing one-off logic, we ask whether the requirement can be absorbed into the system as a reusable mechanism. When the answer is yes, the application evolves by extending its own degrees of freedom: what previously required development becomes a matter of configuration. This is how software transitions from a collection of special cases into an adaptive platform. This is what coreBOS and EvolutivoFW are!</p>
<p>Consistently applying this principle changes the trajectory of the product. It encourages us to build infrastructure that preserves existing behavior while expanding what the system can express. Over time, this reduces future development effort, increases operational flexibility, and allows the application to respond to new business demands without continuous structural change.</p>
<p>In short, this question reframes feature requests as opportunities to increase the system’s long-term adaptability rather than merely satisfying the next requirement.</p>
<p>But this question raises another concern: “<strong>How do I know if it can be done without programming?</strong>”</p>
<p>If you are unaware that a global variable or a configuration setting already exists, you will naturally default to writing code, which is exactly what we want to avoid. Overcoming this "unknown unknown" requires a culture of <strong>Shared Knowledge</strong>. You cannot operate in a silo. You must feel empowered to <strong>ask</strong>—and specifically, to ask those teammates who have a history of sharing and mentoring.</p>
<p>This is where <strong>Training</strong> and <strong>Reading</strong> become your professional responsibility. You <strong>must</strong> actively study the system to understand the tools available to you. But the most critical piece of this puzzle is <strong>Documentation</strong>. You <strong>must</strong> write the documentation you wish you had found. Every time you solve a problem via configuration rather than code, <strong>document it</strong>. This acts as a "pay it forward" mechanism: it ensures the next person finds the answer in our knowledge base rather than reinventing the wheel in the codebase.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Sharing Knowledge, Training, Asking, Documenting, Reading, and Learning are still as relevant as always. Even in times of AI taking over the world!</div>
</div>

<p>So, depending on the answer to the question of whether we can create new infrastructure or not, we arrive at one of two situations:</p>
<ul>
<li><p><strong>Yes:</strong> Ask for permission and validation, because this path will take more time than a direct hack solution. It is time you will save the next time this requirement comes up.</p>
</li>
<li><p><strong>No:</strong> Develop the easiest compatible solution.</p>
</li>
</ul>
<p>If the answer is "<strong>We must code</strong>", the discussion that happened here holds the information we need to continue and create the ticket.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770491373907/0c587dc2-c9a0-4a53-ad96-985038342c43.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-title-and-description-the-human-layer">Title and Description: The "Human" Layer</h2>
<p>The top half of the ticket is for the humans. It provides the "<strong>Why</strong>" and the context that an AI cannot fully grasp.</p>
<ul>
<li><p><strong>Title:</strong> This must be a concise summary. Ideally, this text should be clean enough to serve as the <strong>Commit Message</strong> later.</p>
</li>
<li><p><strong>Description:</strong> This is the detailed explanation. It should include steps, links to designs, screenshots, and the business context.</p>
<ul>
<li><div data-node-type="callout">
  <div data-node-type="callout-emoji">💡</div>
  <div data-node-type="callout-text"><em>Note:</em> You can write this in your team's native language. The AI doesn't strictly need to "understand" the business benefit (like reducing churn), but your team does.</div>
  </div>


</li>
</ul>
</li>
</ul>
<h2 id="heading-validation-the-ai-layer">Validation: The "AI" Layer</h2>
<p>This is the most critical update to our process. In my original guidelines, <strong>VALIDATION</strong> was a checklist for the human tester.</p>
<p><strong>Now, the Validation section is a prompt for the AI.</strong></p>
<p>When using tools like <a target="_blank" href="https://community.atlassian.com/forums/Rovo-for-Software-Teams-Beta/Introducing-acceptance-criteria-checks-in-Code-Reviewer/ba-p/3066586">Atlassian Rovo</a> or <a target="_blank" href="https://docs.github.com/en/copilot/tutorials/coding-agent/get-the-best-results">GitHub Copilot</a> for PR reviews, they look for specific instructions to verify the code against. To make this work, the Validation section must follow strict "Machine-Readable" rules.</p>
<h3 id="heading-how-to-write-the-validation-section-for-ai">How to write the Validation section for AI:</h3>
<ol>
<li><p><strong>Use the "Magic Words":</strong> The AI scans for a specific header. You must label this section <strong>"Acceptance Criteria"</strong> or <strong>"Definition of Done"</strong> (Case sensitive).</p>
<ol>
<li><div data-node-type="callout">
 <div data-node-type="callout-emoji">💡</div>
 <div data-node-type="callout-text"><em>If you are in a rush, Rovo also recognizes the standard shorthands AC, ACs, or DoD. I recommend you use your system’s templating engine to write this for you.</em></div>
 </div>
</li>
</ol>
</li>
<li><p><strong>With current tooling, Language Must Be English:</strong> Even if the rest of the ticket is in Spanish, the Validation criteria <em>must</em> be in English for the current generation of AI agents to verify it against the code.</p>
<ol>
<li><div data-node-type="callout">
 <div data-node-type="callout-emoji">⚠</div>
 <div data-node-type="callout-text">This is what I have read, which really surprises me, though, and probably will not be true if you are reading this in the (near) future.</div>
 </div>
</li>
</ol>
</li>
<li><p><strong>Be Binary (Pass/Fail):</strong> The AI compares the text to the code.</p>
<ul>
<li><p><em>Bad:</em> "Check that the API works well." (Subjective).</p>
</li>
<li><p><em>Good:</em> "API Endpoint <code>POST /v1/tickets/</code> exists and accepts parameter <code>id</code>." (Measurable).</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-example-the-translation-strategy">Example: The "Translation" Strategy</h3>
<p>We separate the <strong>Context (Description)</strong> from the <strong>Verification (Validation)</strong>.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Ticket Section</td><td>Audience</td><td>Language</td><td>Content</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Description</strong></td><td>Humans</td><td>Native (e.g., Spanish)</td><td>"We need to identify users who did X to prevent spamming them." (Why we are doing it).</td></tr>
<tr>
<td><strong>Validation</strong></td><td>AI Agent</td><td><strong>English</strong></td><td><strong>Acceptance Criteria:</strong> Create boolean field <code>has_done_x</code>. Endpoint <code>POST /x_done</code> sets this flag to <code>true</code>.</td></tr>
</tbody>
</table>
</div><p>Atlassian has some good recommendations:</p>
<ul>
<li><p>Use AI to turn your brainstormed thoughts into clear, structured work item descriptions.</p>
</li>
<li><p>Use clear, unambiguous language.</p>
</li>
<li><p>Keep each criterion statement short and focused on one specific thing.</p>
</li>
<li><p>Break large epics into smaller stories with clear “done” conditions.</p>
</li>
</ul>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Once you find a format and structure that works for your team, use it as a template so you can quickly reuse it for all your work items.</div>
</div>

<p>One last comment worth making: even with AI everywhere, the rule of <strong>Garbage-In → Garbage-Out</strong> is still valid. AI does not reduce ambiguity; it amplifies it. If the criteria are vague, the review will be meaningless.</p>
<h2 id="heading-the-60-stop-and-ask-rule-is-now-automated">The "60% Stop and Ask Rule" is Now Automated</h2>
<p>My favorite rule has always been: <strong>"At 60% of effort, stop and ask: Am I finished?"</strong></p>
<p>The logic is that the remaining 40% of the effort is testing, <strong>documentation</strong>, and review. If you haven't finished the core logic by the 60% mark, you are off track.</p>
<p><strong>The AI now enforces this rule for us.</strong></p>
<p>When you open a Pull Request (usually around that 60-70% mark), the AI Code Reviewer scans your code against the <strong>Validation</strong> section. It instantly tells you:</p>
<ul>
<li><p>✅ Criteria Met</p>
</li>
<li><p>❌ Criteria Missing</p>
</li>
<li><p>⚠️ Manual Check Needed</p>
</li>
</ul>
<p>If the AI marks a criterion as missing, the answer to "Am I finished?" is objectively <strong>No</strong>. You don't need a senior reviewer to tell you that you forgot the database migration; the system catches it immediately.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770491520876/d378d13a-694b-43a6-b4ad-73dc26a802da.png" alt class="image--center mx-auto" /></p>
<p>Remember to keep stakeholders informed. If the dates are slipping, notify your peers. <strong>Keep the conversation moving!</strong></p>
<h2 id="heading-effort-amp-priority">Effort &amp; Priority</h2>
<p>Finally, remember that <strong>Effort is super important</strong>.</p>
<ul>
<li><p><strong>Dates:</strong> We may hate deadlines, but we must notify the team if dates slip. <strong>Keep the conversation moving!</strong></p>
</li>
<li><p><strong>Priority:</strong> always urgent</p>
</li>
<li><p><strong>Tags</strong></p>
</li>
<li><p><strong>Stakeholders</strong>: informer, validator, …</p>
</li>
</ul>
<h2 id="heading-an-example">An example</h2>
<p><strong>Title:</strong> Implementation of Call to Action Flag</p>
<p><strong>Description (Context):</strong> To optimize marketing campaigns, we need to centralize the call-to-action status. Currently, there is a blind spot: we do not know whether the user has performed the action or not.</p>
<p>The goal is to use our central application as a "Single Source of Truth" from where we can confidently determine if a contact should be included or not in an email marketing campaign.</p>
<p><strong>Acceptance Criteria</strong> <em>(Rovo will scan this section. Must be in English.)</em></p>
<p>1. <strong>Database:</strong> Add a boolean column <code>cta_done</code> to the <code>Contact</code> table.</p>
<p>2. <strong>API:</strong> Implement endpoint <code>POST /v1/contacts/{id}/cta-done</code></p>
<p>3. <strong>Logic:</strong> The endpoint must be accessible only by a user with a valid API token.</p>
<p>4. <strong>Logic:</strong> The endpoint must accept a JSON payload with <code>contact_id</code>.</p>
<p>5. <strong>Logic:</strong> When the endpoint is called, update <code>cta_done</code> to <code>true</code>.</p>
<p><strong>Business Goals (Manual Verification)</strong> <em>(kept separate so Rovo doesn't flag them as "Missing Code")</em></p>
<p>• ⚠️ <strong>Churn:</strong> Verify that users with this flag stop receiving emails.</p>
<p>• ⚠️ <strong>Conversion:</strong> Verify that marketing spend is focused only on users with a 0€ balance.</p>
<p><strong>Priority:</strong> Medium</p>
<p><strong>Dates:</strong> 2026-02-20</p>
<p><strong>Tags:</strong> API, Marketing</p>
<p><strong>Effort:</strong> 1 day</p>
<h2 id="heading-summary">Summary</h2>
<p>The goal of a task definition hasn't changed: we want to avoid wasted effort.</p>
<ul>
<li><p>The <strong>Description</strong> ensures the <em>humans</em> know <strong>why</strong> we are building it.</p>
</li>
<li><p>The <strong>Validation</strong> section ensures the <em>AI</em> can verify <strong>what</strong> we built.</p>
</li>
</ul>
<p>By being strict with our Validation criteria—using English and specific headers—we turn our ticketing system into an automated quality assurance engine.</p>
]]></content:encoded></item><item><title><![CDATA[The "Average" Problem: Why AGI is a Moving Goalpost]]></title><description><![CDATA[The pursuit of Artificial General Intelligence (AGI) — a concept I learned about this week — is often framed as a climb toward a distant mountain peak. We define it broadly, as an AI that matches or exceeds the sum total of human capability—the abili...]]></description><link>https://joebordes.com/the-average-problem-why-agi-is-a-moving-goalpost</link><guid isPermaLink="true">https://joebordes.com/the-average-problem-why-agi-is-a-moving-goalpost</guid><category><![CDATA[AI]]></category><category><![CDATA[agi]]></category><category><![CDATA[TechHumor]]></category><category><![CDATA[Futureofwork]]></category><dc:creator><![CDATA[Joe Bordes]]></dc:creator><pubDate>Fri, 30 Jan 2026 12:20:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1769775381163/8997c1f0-c254-40ed-8fec-e2677d82e11e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The pursuit of <a target="_blank" href="https://en.wikipedia.org/wiki/Artificial_general_intelligence">Artificial General Intelligence (AGI)</a> — a concept I learned about this week — is often framed as a climb toward a distant mountain peak. We define it broadly, as an AI that matches or exceeds the <strong>sum total</strong> of human capability—the ability to reason like a scientist, create like an artist, and synthesize information like a polymath.</p>
<p>However, a quieter, more humbling realization is hidden in that definition.</p>
<p>If we shifted the goalposts and defined AGI as the <strong>average</strong> of human intelligence, the "intelligence explosion" wouldn't be a future event — it would be a retrospective one.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769775264906/3efbba90-d7e7-4d7b-9ca0-a8d34ecc7fdd.png" alt class="image--center mx-auto" /></p>
<p>When we look at the "average" human baseline, we aren't looking at the ability to solve quantum equations or write symphonies. We are looking at a baseline that often struggles with media literacy, falls for basic phishing scams, and engages in circular logic on internet forums (without getting into what we are doing to the planet and with our political system 🤦‍♂️). By that metric, Large Language Models (LLMs) didn't just meet the average; they soared past it somewhere between GPT-3 and GPT-4.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769775351569/8906f027-a8a8-47e8-8b73-2a713cca9af9.png" alt class="image--center mx-auto" /></p>
<p>The humor in this observation masks a technical truth: AI doesn’t need to be "God-like" to be transformative; it only needs to be slightly more competent than the median human at a specific task. We keep moving the goalpost toward the "Sum of Humanity" because if we admitted we’d already reached the "Average," we’d have to face the fact that our machines are already more "human" than we care to admit.</p>
<p>We aren't just building a mirror of our best selves; we are building a tool that proves the "average" was never as high as we thought it was.</p>
<p><strong>Stay calm</strong>. We aren’t being replaced by superintelligence yet; we’re just being joined by a very fast version of ourselves.</p>
]]></content:encoded></item><item><title><![CDATA[Stop Guessing, Start Measuring Your Git Repository]]></title><description><![CDATA[It started with a simple request: "Can you get me a report on our top committers?"
I searched for tools to extract statistical information from a git repository, but found that most were designed for version control, not analysis. I eventually ran in...]]></description><link>https://joebordes.com/stop-guessing-start-measuring-your-git-repository</link><guid isPermaLink="true">https://joebordes.com/stop-guessing-start-measuring-your-git-repository</guid><category><![CDATA[Git]]></category><category><![CDATA[repository]]></category><category><![CDATA[#reporting]]></category><category><![CDATA[Metabase]]></category><category><![CDATA[statistics]]></category><dc:creator><![CDATA[Joe Bordes]]></dc:creator><pubDate>Mon, 26 Jan 2026 19:21:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1769372910888/b458c64c-59b8-4b0d-990d-f60433fa6302.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It started with a simple request: "<strong>Can you get me a report on our top committers?</strong>"</p>
<p>I searched for tools to extract statistical information from a git repository, but found that most were designed for version control, not analysis. I eventually ran into <a target="_blank" href="https://github.com/shenxianpeng/gitstats">gitstats</a>, which does a great job at extracting metrics but generates a <strong>static snapshot</strong>. I showed the HTML report to my client, and the inevitable happened: he started asking, "Can I filter this by month?", "Can I restrict that to the backend team?", "Can I see only the refactors?"</p>
<p>It was the normal human process: <strong>the moment you start seeing data, you want to ask it questions.</strong> But static reports can't answer new questions.</p>
<p>I realized that hacking filters into existing tools wasn't enough, but urgency reigns, so I <a target="_blank" href="https://github.com/shenxianpeng/gitstats/pull/158">added some filtering options</a> to gitstats and gave it to my client to cover the momentary needs. I was going to leave it there, but the <strong>ETL/BI engineer inside me</strong> was hooked. If we wanted real answers—like "Who was the most productive author of 2024?" or "Which files have the highest churn rate?"—we didn't need a report generator; we needed a <strong>data warehouse</strong>.</p>
<p>Thus was born <a target="_blank" href="https://github.com/joebordes/gitstatdb"><strong>GitStatsDB</strong></a>.</p>
<p>Today, I am releasing <code>gitstatdb</code>, an open-source ETL (Extract, Transform, Load) tool that turns your Git history into a structured MySQL database ready for dynamic reporting.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">If you have ever tried to extract meaningful management metrics from <code>git log</code>, <code>diff stat</code>, <code>ls-*</code>, and similar commands, <strong>you know the pain</strong>. While Git is fantastic for version control, it wasn't designed as an analytics engine.</div>
</div>

<h2 id="heading-the-concept-your-code-history-as-data">The Concept: Your Code History as Data</h2>
<p>The philosophy behind <code>gitstatdb</code> is simple: standard SQL is more powerful for reporting than Git commands. By extracting repository metadata and loading it into a relational database, we unlock the full power of BI tools like Metabase, Superset, or Tableau.</p>
<p>Unlike simple commit counters, <code>gitstatdb</code> captures the deep context of your project:</p>
<ul>
<li><p><strong>Complete History:</strong> Commits, normalized authors, and committers.</p>
</li>
<li><p><strong>Branch Evolution:</strong> Tracks all branches, including those that have been deleted, ensuring historical accuracy.</p>
</li>
<li><p><strong>File Forensics:</strong> Tracks every file change (insertions, deletions, modifications) and file renames.</p>
</li>
<li><p><strong>Merge Relationships:</strong> Automatically detects source and target branches for merges, making it easier to visualize workflow efficiency.</p>
</li>
</ul>
<h2 id="heading-under-the-hood">Under the Hood</h2>
<p>The database schema is designed for performance and detailed analysis. It features normalized tables for <code>author</code>, <code>repository</code>, and <code>branch</code>, linked via a central <code>commit</code> table.</p>
<p>We also include pre-calculated statistics tables (<code>repository_statistics</code>, <code>branch_statistics</code>, <code>author_repository_statistics</code>). This means that when you connect a dashboard tool, it doesn't have to crunch millions of rows in real-time—the heavy lifting is already done.</p>
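<p>To make that concrete, here is a minimal sketch of the kind of question plain SQL can now answer. The table names (<code>commit</code>, <code>author</code>) come from the schema description above, but the column names (<code>author_id</code>, <code>committed_at</code>, <code>name</code>) are assumptions for illustration, so check the actual <code>gitstatdb</code> schema before reusing it:</p>
<pre><code class="lang-sql">-- Hypothetical example: commits per author per month.
-- Column names are inferred from the schema description; adjust them to the real schema.
SELECT a.name                               AS author,
       DATE_FORMAT(c.committed_at, '%Y-%m') AS month,
       COUNT(*)                             AS commits
FROM `commit` c
JOIN `author` a ON a.id = c.author_id
GROUP BY a.name, month
ORDER BY month DESC, commits DESC;
</code></pre>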
<h3 id="heading-incremental-updates">Incremental Updates</h3>
<p>One of the biggest challenges with Git analytics is performance. <code>gitstatdb</code> supports <strong>incremental updates</strong>. After the initial import, you can run the tool daily (which is actually recommended); it detects new commits and processes only what has changed. It even detects when local branches have been deleted and marks them accordingly in the database.</p>
<h2 id="heading-visualizing-the-data">Visualizing the Data</h2>
<p>Once your data is in MySQL, the magic happens. You can connect tools like <strong>Metabase</strong> to visualize your repository's heartbeat.</p>
<p>I have created a set of advanced dashboards that track:</p>
<ul>
<li><p><strong>Authors of the Month:</strong> Ranked by impact and consistency (not just commit counts).</p>
</li>
<li><p><strong>Code Churn:</strong> Identifying "hotspots" in your codebase that are frequently rewritten.</p>
</li>
<li><p><strong>Project Velocity:</strong> Visualizing merge rates and active days.</p>
</li>
</ul>
<p>Watch this video to see how we use Metabase to explore a repository's history:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/BlpatDYWAno">https://youtu.be/BlpatDYWAno</a></div>
<h2 id="heading-installation-amp-usage">Installation &amp; Usage</h2>
<p>Getting started is straightforward. You can install it directly as a Python package:</p>
<pre><code class="lang-sql"><span class="hljs-comment"># 1. Install</span>
pip <span class="hljs-keyword">install</span> -e .

<span class="hljs-comment"># 2. Configure your database in a .env file</span>
echo <span class="hljs-string">"DB_NAME=gitstatdb"</span> &gt; .env
<span class="hljs-comment"># ... add user/pass ...</span>

<span class="hljs-comment"># 3. Run the ETL</span>
gitstatdb /<span class="hljs-keyword">path</span>/<span class="hljs-keyword">to</span>/your/repository
</code></pre>
<p>For specific analysis, you can even force the recalculation of statistics for specific branches or the whole repo via the command line.</p>
<h2 id="heading-commercial-reporting">Commercial Reporting</h2>
<p>While <code>gitstatdb</code> is open source (MIT License) and free to use, building the right SQL queries for advanced dashboards can be tricky.</p>
<p>The project includes a <code>reporting</code> directory with setup instructions for Metabase. However, the advanced template packs, complex SQL reports (like the "Authors of the Year" logic), and specific Metabase/Superset configurations are available as <strong>On-Demand Services</strong>.</p>
<p>If you want to skip the setup and jump straight to insights, you can contact me for the premium dashboard pack, which includes:</p>
<ul>
<li><p>Support for setting up the tool and the necessary crons.</p>
</li>
<li><p>Pre-configured Metabase dashboards.</p>
</li>
<li><p>Complex SQL views for Churn and Author ranking.</p>
</li>
<li><p>Support for setting up Superset (others?) visualizations.</p>
</li>
</ul>
<h2 id="heading-get-the-code">Get the Code</h2>
<p>The project is hosted on GitHub. Give it a star and start treating your code history like the valuable dataset it is.</p>
<p>👉 <a target="_blank" href="https://github.com/joebordes/gitstatdb"><strong>GitHub - joebordes/gitstatdb</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[Hacktoberfest 2025]]></title><description><![CDATA[Hacktoberfest 2025 just finished — and guess what? They brought back the swag!After last year’s digital-only celebration, seeing the return of t-shirts and tree-planting rewards brought a wave of nostalgia and motivation 😊
For some reason, I didn’t ...]]></description><link>https://joebordes.com/hacktoberfest-2025</link><guid isPermaLink="true">https://joebordes.com/hacktoberfest-2025</guid><category><![CDATA[#hacktoberfest ]]></category><category><![CDATA[Open Source]]></category><dc:creator><![CDATA[Joe Bordes]]></dc:creator><pubDate>Fri, 31 Oct 2025 19:53:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1759232933750/161e673a-c721-43a5-8a04-217ef6ae261e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong><em>Hacktoberfest 2025 just finished — and guess what? They brought back the swag!</em></strong><br />After last year’s digital-only celebration, seeing the return of t-shirts and tree-planting rewards brought a wave of nostalgia and motivation <strong>😊</strong></p>
<p>For some reason, I didn’t receive any notification about the event this year. I casually remembered about it at the end of September and went to see if it was still happening. Happily, I found that it was, and they had decided to return to the t-shirts and tree planting. This may seem stupid and unnecessary, but it made me happy and motivated me to participate once again.</p>
<p>My first surprise was with the registration information. They asked for more information than ever, which made me uncomfortable and suspicious of it being just another marketing scheme.</p>
<p>The next surprise was that they increased the number of commits to 6. Not that I think that is wrong, but it caught my attention.</p>
<p>In any case, this year found my mind actively looking for a reason to dedicate time to the <a target="_blank" href="https://modelcontextprotocol.io/docs/getting-started/intro">MCP protocol</a>. I had been reading and listening to talks about it for some weeks at this point, and I saw a clear connection with Evolutivo.fw (coreBOS), which was the perfect excuse to finally sit down and dive into it.</p>
<h2 id="heading-my-contributions">My Contributions</h2>
<ol>
<li><h3 id="heading-translating-tiddlywiki-to-spanish">Translating TiddlyWiki to Spanish</h3>
</li>
</ol>
<p>My first pull request is the traditional <a target="_blank" href="https://github.com/TiddlyWiki/TiddlyWiki5/pull/9311">TiddlyWiki ES translation</a>. Thanks, Jeremy!</p>
<ol start="2">
<li><h3 id="heading-helping-tcpdf">Helping TCPDF</h3>
</li>
</ol>
<p>Then I got a follow-up question in an issue report I made to TCPDF. They aren’t participating in Hacktoberfest, but I decided <a target="_blank" href="https://github.com/tecnickcom/TCPDF/pull/832">to help anyway</a>. Even though they aren’t part of Hacktoberfest, contributing still feels in the same spirit.</p>
<ol start="3">
<li><h3 id="heading-building-the-evolutivofw-mcp-integration">Building the Evolutivo.fw MCP Integration</h3>
</li>
</ol>
<p>Next, I started the <a target="_blank" href="https://github.com/coreBOS/MCP">coreBOS (Evolutivo.fw) MCP project</a>. Once I learned the high-level concept of MCP, I saw how it could help when working with the Evolutivo application. If we could ask questions and work with the application from an intelligent chat environment, it could be a decisive step forward for the future of the project.</p>
<p>So I started studying the MCP protocol and implemented a <a target="_blank" href="https://github.com/coreBOS/MCP">complete integration for CRUD operations and more</a>. You can <a target="_blank" href="https://blog.evolutivo.it/blog/evolutivomcp">read all about it in this blog post</a>. For the Hacktoberfest participation, this project alone contributed the 6 commits I needed and many more.</p>
<ol start="4">
<li><h3 id="heading-updating-node-red-language-files">Updating Node-RED Language Files</h3>
<p> Next, I updated the <a target="_blank" href="https://github.com/node-red/node-red/pull/5299">Node-RED language files</a>, like I did last year. They accepted the pull request a few days later and were kind enough to add the <code>hacktoberfest-accepted</code> label so it counted towards my participation. Thanks!</p>
</li>
<li><h3 id="heading-translating-mautic-to-spanish">Translating Mautic to Spanish</h3>
</li>
</ol>
<p>Finally, I translated some Mautic strings, a little over 200, to Spanish and <a target="_blank" href="https://github.com/mautic/mautic/pull/15604">created a PR</a> that also fixes some syntax and grammar errors I found along the way.</p>
<h2 id="heading-hacktoberfest-participation">Hacktoberfest Participation</h2>
<p>As a conclusion, I can say that <strong>small contributions here and there matter</strong> and that this is another great reason to dedicate time to new projects.</p>
<p>So, another year where I can still say, with pride, that I have participated in all the challenges of <strong>Hacktoberfest</strong> that have been held. Looking forward to seeing what happens next year!</p>
]]></content:encoded></item><item><title><![CDATA[Optimizing MySQL Application Queries]]></title><description><![CDATA[For a long time, I tried to avoid the topic of database optimization. It felt like it was something somebody else could learn for me and avoid having yet another skill and learning curve, a task best delegated to more specialized employees or coworke...]]></description><link>https://joebordes.com/optimizing-mysql-application-queries</link><guid isPermaLink="true">https://joebordes.com/optimizing-mysql-application-queries</guid><category><![CDATA[mysqltuner]]></category><category><![CDATA[MySQL]]></category><category><![CDATA[optimization]]></category><category><![CDATA[query-optimization]]></category><category><![CDATA[percona]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[Joe Bordes]]></dc:creator><pubDate>Sun, 31 Aug 2025 11:42:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756591590378/0c63eca8-8c27-4ac2-af5d-ecdf642f028b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For a long time, I tried to avoid the topic of database optimization. It felt like something somebody else could learn for me, sparing me yet another skill and learning curve: a task best delegated to more specialized employees or coworkers. After all, <strong>I am a System Architect and Designer</strong>, a programmer, not a System administrator (a job which I personally enjoy). Some attempts were more successful than others, but for years, I never felt like I had someone I could count on for this.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">✅</div>
<div data-node-type="callout-text">The last person I worked with on this project did a great job and left some very useful procedures that I based this article on.</div>
</div>

<p>I insisted on staying away from it, that is, until this past week. The inevitable happened, and I was forced to face it head-on. A reseller informed us that his applications were working way too slowly. I had no idea where the problem could be, but I suspected that it lay somewhere deep within the database or (hopefully) in the server itself. I had no choice but to learn.</p>
<p>What I discovered was that it was far easier than I imagined. And, to be honest, a lot of that is thanks to <strong>generative AI, which has changed everything</strong>, here too.</p>
<p>Let's dive in.</p>
<h2 id="heading-the-tools">The Tools</h2>
<p>My newfound database optimization arsenal consists of just a few key tools. Together, they form a workflow that takes you from a vague sense of a problem to a precise, actionable solution.</p>
<ul>
<li><p><a target="_blank" href="https://github.com/major/MySQLTuner-perl"><code>mysqltuner</code></a></p>
</li>
<li><p><a target="_blank" href="https://etckeeper.branchable.com/"><code>etckeeper</code></a></p>
</li>
<li><p><a target="_blank" href="https://www.mysql.com/">MySQL</a> <code>mysqld.cnf</code></p>
</li>
<li><p><a target="_blank" href="https://www.percona.com/">Percona</a> <a target="_blank" href="https://docs.percona.com/percona-toolkit/pt-query-digest.html"><code>pt-query-digest</code></a></p>
</li>
<li><p>Your favorite generative AI</p>
</li>
</ul>
<h2 id="heading-the-workflow">The Workflow</h2>
<h3 id="heading-1-the-quick-audit-mysqltuner"><strong>1. The Quick Audit:</strong> <code>mysqltuner</code></h3>
<p><code>mysqltuner</code> is a Perl script that acts as your friendly MySQL performance auditor. You run it from the command line, and it connects to your MySQL instance to give you a quick report card. It analyzes everything from your buffer settings and thread cache hits to your key-read efficiency and a host of other metrics. It's not a diagnostic tool for a specific query, but rather a "general health check" that gives you a high-level overview of where your configuration is weak.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">The important message here is that any optimization you can make to MySQL itself will benefit all applications, so we need to read the output of mysqltuner and implement the recommendations we can.</div>
</div>

<p>In the general case of Evolutivo, and in particular in the case I was debugging this week, the (edited) output at the start was:</p>
<pre><code class="lang-bash">[!!] /var/<span class="hljs-built_in">log</span>/mysql/error.log contains 48 warning(s).
[OK] /var/<span class="hljs-built_in">log</span>/mysql/error.log does not contain any error.
[OK] Maximum reached memory usage: 26.8G (45.55% of installed RAM)
[OK] Maximum possible memory usage: 28.1G (47.72% of installed RAM)
[OK] Overall possible memory usage with other process is compatible with memory available
[OK] Slow queries: 0% (22/36M)
[!!] Highest connection usage: 89%  (135/151)
[OK] Aborted connections: 0.00%  (4/176256)
[!!] name resolution is active : a reverse name resolution is made <span class="hljs-keyword">for</span> each new connection and can reduce performance
[OK] Sorts requiring temporary tables: 0% (193 temp sorts / 1M sorts)
[!!] Joins performed without indexes: 50789
[OK] Temporary tables created on disk: 0% (121 on disk / 1M total)
[OK] Thread cache hit rate: 99% (1K created / 176K connections)
[!!] Table cache hit rate: 0% (2K open / 1M opened)
[OK] Open file <span class="hljs-built_in">limit</span> used: 2% (218/10K)
[OK] Table locks acquired immediately: 99% (712K immediate / 713K locks)
[!!] Binlog cache memory access: 83.85% (225656 Memory / 269115 Total)
[OK] InnoDB File per table is activated
[OK] InnoDB buffer pool / data size: 16.0G/4.7G
[OK] Ratio InnoDB <span class="hljs-built_in">log</span> file size / InnoDB Buffer pool size: 2.0G * 2/16.0G should be equal to 25%
[OK] InnoDB buffer pool instances: 16
[--] Number of InnoDB Buffer Pool Chunk : 128 <span class="hljs-keyword">for</span> 16 Buffer Pool Instance(s)
[OK] Innodb_buffer_pool_size aligned with Innodb_buffer_pool_chunk_size &amp; Innodb_buffer_pool_instances
[OK] InnoDB Read buffer efficiency: 99.99% (3807017853 hits/ 3807246008 total)
[!!] InnoDB Write Log efficiency: 60.97% (2079430 hits/ 3410771 total)
[OK] InnoDB <span class="hljs-built_in">log</span> waits: 0.00% (0 waits / 1331341 writes)
-------- Recommendations ---------------------------------------------------------------------------
General recommendations:
    Control warning line(s) into /var/<span class="hljs-built_in">log</span>/mysql/error.log file
    Reduce or eliminate persistent connections to reduce connection usage
    Configure your accounts with ip or subnets only, <span class="hljs-keyword">then</span> update your configuration with skip-name-resolve=1
    Adjust your join queries to always utilize indexes
    Increase table_open_cache gradually to avoid file descriptor limits
    Read this before increasing table_open_cache over 64: https://bit.ly/1mi7c4C
    Read this before increasing <span class="hljs-keyword">for</span> MariaDB https://mariadb.com/kb/en/library/optimizing-table_open_cache/
    This is MyISAM only table_cache scalability problem, InnoDB not affected.
    See more details here: https://bugs.mysql.com/bug.php?id=49177
    This bug already fixed <span class="hljs-keyword">in</span> MySQL 5.7.9 and newer MySQL versions.
    Beware that open_files_limit (10000) variable 
    should be greater than table_open_cache (2000)
    Increase binlog_cache_size (Actual value: 32768)
Variables to adjust:
    max_connections (&gt; 151)
    wait_timeout (&lt; 28800)
    interactive_timeout (&lt; 28800)
    join_buffer_size (&gt; 80.0M, or always use indexes with JOINs)
    table_open_cache (&gt; 2000)
    binlog_cache_size (16.0M)
</code></pre>
<p>Reading the procedures we have implemented as a company in the past, together with the output above, the settings that stand out are:</p>
<ul>
<li><p><strong>table_definition_cache</strong>: When a client establishes a connection to a MySQL server, the server maintains a cache of table definitions in memory so that it can quickly look up metadata about tables that are used in queries. The <code>table_definition_cache</code> variable controls the size of this cache. Evolutivo has a LOT of tables, so we need to increase this size to avoid having to go to the disk for the information.</p>
</li>
<li><p><strong>max_connections</strong>: in a server with many installations, even of other applications like WordPress, PrestaShop, Nextcloud, … which all use the MySQL server, this value has to be high</p>
</li>
<li><p><strong>innodb_redo_log_capacity</strong>: The <code>innodb_redo_log_capacity</code> setting controls the total size of the redo log files. A larger capacity allows MySQL to defer flushing changes from memory to disk, which can significantly improve write performance, especially during high-volume operations.</p>
</li>
<li><p><strong>skip-name-resolve</strong>: this avoids a reverse DNS lookup on every new connection, but be careful: once it is set, your database accounts must be defined with IPs or subnets only, so some applications may need their grants adjusted.</p>
</li>
</ul>
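<p>As a quick illustration, here is a sketch of how the dynamic settings above could be tried at runtime with <code>SET GLOBAL</code>. The values are purely illustrative and must be sized for your own RAM, table count, and workload; the durable change still belongs in <code>mysqld.cnf</code> (kept under version control by etckeeper, described below), and <code>skip-name-resolve</code> is not dynamic at all:</p>
<pre><code class="lang-sql">-- Illustrative values only: size these for your own server and workload.
SET GLOBAL table_definition_cache   = 4096;
SET GLOBAL max_connections          = 300;
SET GLOBAL innodb_redo_log_capacity = 4 * 1024 * 1024 * 1024;  -- 4G, MySQL 8.0.30+
-- skip-name-resolve is read-only at runtime: set it in mysqld.cnf and restart.
SHOW GLOBAL VARIABLES LIKE 'table_definition_cache';
</code></pre>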
<p>After setting these variables (and some other work), I ended up with this output:</p>
<pre><code class="lang-bash">[OK] Maximum reached memory usage: 17.3G (29.46% of installed RAM)
[OK] Maximum possible memory usage: 36.1G (61.29% of installed RAM)
[OK] Overall possible memory usage with other process is compatible with memory available
[OK] Slow queries: 2% (19K/735K)
[OK] Highest usage of available connections: 6% (16/251)
[!!] Aborted connections: 4.03%  (295/7328)
[OK] Sorts requiring temporary tables: 0% (10 temp sorts / 46K sorts)
[!!] Joins performed without indexes: 217
[OK] Temporary tables created on disk: 0% (1 on disk / 18K total)
[OK] Thread cache hit rate: 99% (58 created / 7K connections)
[OK] Table cache hit rate: 29% (4K open / 16K opened)
[OK] Open file <span class="hljs-built_in">limit</span> used: 5% (520/10K)
[OK] Table locks acquired immediately: 99% (74K immediate / 74K locks)
[OK] Binlog cache memory access: 100.00% (12160 Memory / 12160 Total)
[OK] InnoDB File per table is activated
[OK] InnoDB buffer pool / data size: 16.0G/4.7G
[OK] Ratio InnoDB <span class="hljs-built_in">log</span> file size / InnoDB Buffer pool size: 2.0G * 2/16.0G should be equal to 25%
[OK] InnoDB buffer pool instances: 16
[--] Number of InnoDB Buffer Pool Chunk : 128 <span class="hljs-keyword">for</span> 16 Buffer Pool Instance(s)
[OK] Innodb_buffer_pool_size aligned with Innodb_buffer_pool_chunk_size &amp; Innodb_buffer_pool_instances
[OK] InnoDB Read buffer efficiency: 99.90% (44294781 hits/ 44339899 total)
[!!] InnoDB Write Log efficiency: 73.46% (154640 hits/ 210498 total)
[OK] InnoDB <span class="hljs-built_in">log</span> waits: 0.00% (0 waits / 55858 writes)
-------- Recommendations ---------------------------------------------------------------------------
General recommendations:
    Control warning line(s) into /var/<span class="hljs-built_in">log</span>/mysql/error.log file
    Control error line(s) into /var/<span class="hljs-built_in">log</span>/mysql/error.log file
    MySQL was started within the last 24 hours - recommendations may be inaccurate
    Reduce or eliminate unclosed connections and network issues
    Adjust your join queries to always utilize indexes
Variables to adjust:
    join_buffer_size (&gt; 80.0M, or always use indexes with JOINs)
</code></pre>
<p><strong>A significant improvement</strong>!</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">❗</div>
<div data-node-type="callout-text">Note that this step is mandatory for any production server using MySQL applications, and it should be repeated from time to time as things change and server usage becomes real.</div>
</div>

<p><strong><mark>But, before you do that, let’s see the other tools.</mark></strong></p>
<h3 id="heading-2-the-safety-net-etckeeper"><strong>2. The Safety Net:</strong> <code>etckeeper</code></h3>
<p>This tool isn't directly related to MySQL performance, but it's a total lifesaver for any Linux server. <code>etckeeper</code> is a collection of scripts that hooks into your package manager (<code>apt</code>, <code>dnf</code>, etc.) to automatically commit changes to the <code>/etc</code> directory into a local Git repository.</p>
<p>In our case, we'll be modifying the core MySQL configuration file, <code>mysqld.cnf</code>. Having Git track and version every single change we make is of immeasurable value. If you make a mistake, you can simply roll back. This simple safety net removes a lot of the fear and hesitation from performing server-level changes.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">This is a must-have tool!</div>
</div>

<h3 id="heading-3-the-configuration-file-mysqldcnf"><strong>3. The Configuration File:</strong> <code>mysqld.cnf</code></h3>
<p>This is the main configuration file for your MySQL server. It's typically located in <code>/etc/mysql/mysql.cnf</code> or <code>/etc/my.cnf</code>. <code>mysqltuner</code> will suggest several changes to this file, such as adjusting buffer sizes or increasing certain cache limits. You'll make these adjustments, save the file, and restart MySQL. Don't worry—<code>etckeeper</code> will be there to save you if anything goes wrong.</p>
<p>You will have to activate the slow query log, let your users work normally for some days, and repeat the analysis over a few weeks.</p>
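<p>A minimal way to switch that on at runtime is sketched below. The threshold and file path are illustrative, so pick what makes sense for your workload, and remember to persist the same settings in <code>mysqld.cnf</code> so they survive a restart:</p>
<pre><code class="lang-sql">-- Illustrative settings: log statements slower than 1 second, plus queries that
-- run without indexes, to the file pt-query-digest will analyze later.
SET GLOBAL slow_query_log      = 'ON';
SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';
SET GLOBAL long_query_time     = 1;
SET GLOBAL log_queries_not_using_indexes = 'ON';
</code></pre>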
<h3 id="heading-4-the-forensic-analysis-pt-query-digest"><strong>4. The Forensic Analysis:</strong> <code>pt-query-digest</code></h3>
<p>This is the most crucial tool in the entire workflow. <code>pt-query-digest</code> is a script from Percona Toolkit that analyzes your MySQL slow query log. It reads through millions of queries and then, like a skilled detective, presents you with a report that tells you exactly which queries are causing the most pain. It will identify the worst offenders based on total execution time, lock time, and number of rows examined. This report is the goldmine of information you need to move forward.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Percona is an incredible company, a serious reference for anyone who needs to take their database performance to a top-notch level.</div>
</div>

<h3 id="heading-5-the-final-easy-step-generative-ai"><strong>5. The Final, Easy Step: Generative AI</strong></h3>
<p>Now comes the fun part. You have the raw data from <code>pt-query-digest</code>, which, to the untrained eye, can still be a bit overwhelming. It looks like this:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Query 11: 0.00 QPS, 0.00x concurrency, ID 0x37107AE09E34F1D59DD462CB636DEBDB at byte 1188119</span>
<span class="hljs-comment"># This item is included in the report because it matches --limit.</span>
<span class="hljs-comment"># Scores: V/M = 0.00</span>
<span class="hljs-comment"># Time range: 2025-08-30T10:32:43 to 2025-08-30T13:26:10</span>
<span class="hljs-comment"># Attribute    pct   total     min     max     avg     95%  stddev  median</span>
<span class="hljs-comment"># ============ === ======= ======= ======= ======= ======= ======= =======</span>
<span class="hljs-comment"># Count          0      44</span>
<span class="hljs-comment"># Exec time      0      2s    25ms    69ms    34ms    42ms     9ms    31ms</span>
<span class="hljs-comment"># Lock time      0   103us     1us     9us     2us     5us     1us     1us</span>
<span class="hljs-comment"># Rows sent      0      48       1       2    1.09    1.96    0.28    0.99</span>
<span class="hljs-comment"># Rows examine   8   2.25M  52.28k  52.28k  52.28k  52.28k       0  52.28k</span>
<span class="hljs-comment"># Query size     0   2.32k      54      54      54      54       0      54</span>
<span class="hljs-comment"># String:</span>
<span class="hljs-comment"># Databases    xxxxx</span>
<span class="hljs-comment"># Hosts        localhost</span>
<span class="hljs-comment"># Users        xxxxx</span>
<span class="hljs-comment"># Query_time distribution</span>
<span class="hljs-comment">#   1us</span>
<span class="hljs-comment">#  10us</span>
<span class="hljs-comment"># 100us</span>
<span class="hljs-comment">#   1ms</span>
<span class="hljs-comment">#  10ms  ################################################################</span>
<span class="hljs-comment"># 100ms</span>
<span class="hljs-comment">#    1s</span>
<span class="hljs-comment">#  10s+</span>
<span class="hljs-comment"># Tables</span>
<span class="hljs-comment">#    SHOW TABLE STATUS FROM `xxxxx` LIKE 'vtiger_email_track'\G</span>
<span class="hljs-comment">#    SHOW CREATE TABLE `xxxxx`.`vtiger_email_track`\G</span>
<span class="hljs-comment"># EXPLAIN /*!50100 PARTITIONS*/</span>
select * from vtiger_email_track <span class="hljs-built_in">where</span> mailid=<span class="hljs-string">'627858'</span>\G
</code></pre>
<p>The final step is to feed that output into your favorite generative AI tool.</p>
<p>And then... you watch.</p>
<p>The AI will tell you exactly what is wrong with the query and, more importantly, <strong>how to fix it</strong>. It will explain that a missing index on a <code>WHERE</code> clause is causing a full table scan, or that a join condition is inefficient. It will then give you the precise <code>ALTER TABLE</code> commands needed to create the necessary indexes.</p>
<p>For the output above, it gave me this response:</p>
<hr />
<blockquote>
<p>This <code>pt-query-digest</code> output shows a query that, while individually fast, is highly inefficient and contributes to overall database load due to frequent, repetitive full table scans.</p>
<h3 id="heading-analysis">Analysis</h3>
<ul>
<li><p><strong>Frequent Execution</strong>: The query runs 44 times, indicating it's part of a routine operation (e.g., loading a user's mailbox view).</p>
</li>
<li><p><strong>Excessive Rows Examined</strong>: The most critical issue is that each execution of the query examines an average of <strong>52,280 rows</strong> to return only about one row. The total number of rows examined is a staggering <strong>2.25 million</strong>. This is a classic symptom of a <strong>full table scan</strong>, where the database has to read every single row in the table to find the one that matches the <code>WHERE</code> clause.</p>
</li>
<li><p><strong>Execution Time</strong>: While the average execution time is only 34 milliseconds, a full table scan for a single row is still a waste of resources. The cumulative time across all 44 executions is 2 seconds, and with more users or a larger dataset, this problem would quickly escalate.</p>
</li>
<li><p><strong>The Culprit</strong>: The <code>WHERE</code> clause <code>where mailid='627858'</code> is the source of the problem. The <code>mailid</code> column is not indexed, forcing the database to perform an inefficient full table scan.</p>
</li>
</ul>
<h3 id="heading-recommendations">Recommendations</h3>
<p>To solve this, you must add an index to the <code>mailid</code> column in the <code>vtiger_email_track</code> table. This is a simple but highly effective change.</p>
<ol>
<li><p><strong>Add an Index to</strong> <code>mailid</code>: Run the following SQL command to create an index on the <code>mailid</code> column.</p>
<pre><code class="lang-sql"> <span class="hljs-keyword">ALTER</span> <span class="hljs-keyword">TABLE</span> vtiger_email_track <span class="hljs-keyword">ADD</span> <span class="hljs-keyword">INDEX</span> <span class="hljs-string">`idx_mailid`</span> (<span class="hljs-string">`mailid`</span>);
</code></pre>
<p> This will allow MySQL to use a highly efficient B-tree index to quickly locate the specific row(s) that match the <code>mailid</code> value, reducing the <code>Rows examine</code> count from tens of thousands to a handful.</p>
</li>
</ol>
<p>By adding this index, you will dramatically reduce the load on your database, speed up this frequently run query, and improve the overall performance of the application that relies on it.</p>
</blockquote>
<hr />
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong><mark>It's that easy, that powerful!</mark></strong></div>
</div>

<p>Honestly, it is mind-blowing to have extracted that response and conclusion from the output of <code>pt-query-digest</code>. In the past, this process would have required hours of research, <code>EXPLAIN</code> plan analysis, and a deep understanding of database internals. Now, with a handful of powerful, open-source tools, a dash of generative AI, and <strong>some local brain intelligence</strong>, you can go from being intimidated by MySQL to an optimization wizard in a single afternoon.</p>
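<p>One caveat worth adding: verify the improvement yourself before and after applying the suggested change. Here is a minimal sketch, assuming you can run the offending query through the <code>mysql</code> client (the database name <code>yourdatabase</code> is a placeholder; the table and column names are taken from the report above):</p>
<pre><code class="lang-bash"># Before: expect type=ALL (full table scan) and tens of thousands of rows examined
mysql yourdatabase -e "EXPLAIN SELECT * FROM vtiger_email_track WHERE mailid='627858'\G"

# Apply the index suggested by the AI
mysql yourdatabase -e "ALTER TABLE vtiger_email_track ADD INDEX idx_mailid (mailid)"

# After: expect type=ref on idx_mailid and only a handful of rows examined
mysql yourdatabase -e "EXPLAIN SELECT * FROM vtiger_email_track WHERE mailid='627858'\G"
</code></pre>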
<p>As a recap, the procedure is:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1756639363693/073b3b15-226c-4736-b2ac-237adedee7f8.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-go-ahead-give-it-a-try">Go ahead, give it a try!</h2>
]]></content:encoded></item><item><title><![CDATA[The Second Is Not the First to Lose]]></title><description><![CDATA[As I wrote my previous post on finding what you love, a recurring thought kept surfacing:
“The second is the first to lose.”
I’ve always hated that phrase. It’s the kind of idea that gets repeated until we stop questioning it. But I do. I question it...]]></description><link>https://joebordes.com/the-second-is-not-the-first-to-lose</link><guid isPermaLink="true">https://joebordes.com/the-second-is-not-the-first-to-lose</guid><category><![CDATA[life]]></category><category><![CDATA[sports]]></category><category><![CDATA[success]]></category><dc:creator><![CDATA[Joe Bordes]]></dc:creator><pubDate>Sun, 06 Jul 2025 09:03:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/TUIOmxxW_Zg/upload/4949bbb943a63d77772b098ffd06c2c9.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As I wrote my <a target="_blank" href="https://joebordes.com/find-what-you-love">previous post on <em>finding what you love</em></a>, a recurring thought kept surfacing:</p>
<p><strong>“The second is the first to lose.”</strong></p>
<p>I’ve always hated that phrase. It’s the kind of idea that gets repeated until we stop questioning it. But I do. <strong>I question it deeply.</strong></p>
<p>For me, it represents something broken — an outdated mindset that only one can win, that everything else is a waste. That if you’re not standing at the top, you may as well not have shown up. And that couldn’t be more wrong.</p>
<p>A few days ago, I watched the final of the <strong>EuroBasket Women’s Championship</strong>. Spain played against Belgium. Spain wasn’t supposed to be there. They weren’t the favorites. In fact, statistically, they weren’t even supposed to make it past the earlier rounds. But they fought. Hard. They played each match with grit, unity, and incredible heart — the kind of performance that defines sport at its best. They made it to the final.</p>
<p>They were on top for most of the match. It came down to the final seconds — a single basket. A heartbeat. That’s how they "lost". That is how basketball is. That is how our society is.</p>
<p>But <strong>they didn’t lose.</strong> They achieved something extraordinary. They overcame odds, inspired fans, and showed a level of excellence that deserves celebration. Not pity. Not a quiet mention between a weather forecast and the latest from Real Madrid CF.</p>
<p>And yet, the next day, barely anyone was there to greet them. Just a few family members. The headlines? “Sad defeat for Spain’s women’s basketball team.” Then silence. Then soccer (again).</p>
<p>This is where our values feel upside down.</p>
<p>In a society that claims to celebrate diversity, inclusion, and equity — especially for women, LGBTIQ+ people, or other underrepresented groups — we need to rethink <strong>what we reward and what we recognize</strong>. Not just in politics or policy, but in the <em>narratives we build</em>.</p>
<p>Sport isn't just about winning. It's about courage. It's about progress. It’s about doing what you love — and doing it with everything you've got, even when the scoreboard doesn't agree with you. That <em>should</em> be enough to command our attention and admiration.</p>
<p>It is exactly like the professional who has found their passion and manages to support their family and the families of their coworkers: that professional will never get invited to Stanford, yet their merit deserves just as much celebration.</p>
<p>And while I’m here — a quick note on another cultural bias: <strong>Why do we keep calling rain “bad weather”?</strong></p>
<p>Seriously. I <em>like</em> it when it rains. It makes things grow. It quiets the world. Not every cloudy day is a problem to fix. Some of them are exactly what we need.</p>
<p>Maybe that’s the point.</p>
<p>We need to redefine what we call a win. What we call <em>success</em>. What we call good or bad.</p>
<p>Because sometimes, the team that “lost” the match <em>won</em> something much more meaningful.</p>
<p>And sometimes, a little rain is exactly what reminds us to look up. 🌧️</p>
]]></content:encoded></item><item><title><![CDATA[Find what you love.]]></title><description><![CDATA[This week, I received the same message three times.

The Understandably newsletter reminded me of the speech Steve Jobs gave at Stanford. Yes, that one, again. You can access it from the newsletter.

Then, during a company meeting, the same theme was...]]></description><link>https://joebordes.com/find-what-you-love</link><guid isPermaLink="true">https://joebordes.com/find-what-you-love</guid><category><![CDATA[passion]]></category><category><![CDATA[success]]></category><category><![CDATA[work]]></category><dc:creator><![CDATA[Joe Bordes]]></dc:creator><pubDate>Sat, 05 Jul 2025 15:32:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/TamMbr4okv4/upload/b09733b871d6378a770b45500adeea00.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This week, I received the same message three times.</p>
<ul>
<li><p>The <a target="_blank" href="https://www.understandably.com/p/find-what-you-love">Understandably newsletter</a> reminded me of the speech Steve Jobs gave at Stanford. <em>Yes</em>, <em>that</em> one, again. You can access it from the newsletter.</p>
</li>
<li><p>Then, during a company meeting, the same theme was pitched again—but this time dressed up in the language of emotional connection, team-building, and community, rather than as... you know, a job.</p>
</li>
<li><p>And just to really drive the point home, I randomly landed on an interview on TV while waiting for my wife, and there was <a target="_blank" href="https://es.wikipedia.org/wiki/Vanesa_Mart%C3%ADn">Vanesa Martín</a> passionately advocating for—yep—finding what you love.</p>
</li>
</ul>
<p>By the third time, I wasn’t even surprised. I have even <a target="_blank" href="https://joebordes.com/the-danger-of-finding-your-purpose-and-passion">written and spoken about this in the past</a>. I get the appeal. The idea is simple, almost seductive, powerful, as most simple ideas are. There’s an undeniable truth in the core message: <em>Find what you love, pursue it with passion, and everything else—skill, career, happiness, money—will follow.</em></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">👉</div>
<div data-node-type="callout-text">Note: I am intentionally avoiding the term “success” here because that term is incredibly hard to measure and overloaded with social connotations.</div>
</div>

<p>And here’s the thing: I believe in it. Mostly.</p>
<p>When you care about something deeply, you’ll put in the hours. You'll keep going even when it's hard. You'll get better, and eventually, you may even be considered a professional—someone who truly knows their craft. But the money, and the invitations to speak at Stanford or be interviewed on TV, are where I disagree.</p>
<p>Because somewhere between passion and profession, the story gets… edited.</p>
<p><strong>We’re told that money and tranquility follow automatically.</strong> That doing what you love will lead to financial security, peace of mind, and a meaningful life. That if you're not there yet, you just haven't tried hard enough or believed deeply enough.</p>
<p>Here’s where I call it out: <em>that’s not the whole truth.</em></p>
<p>Yes, some people do find both financial security and peace of mind by following their passion. But it’s never <em>just</em> passion that gets them there. There’s always something more—access, timing, luck, connections, health, privilege, family support, genetic disposition, even geography. Things they don’t mention. Things they may not even be aware of themselves.</p>
<p>To make it sound like <em>passion alone</em> is the golden ticket is—let’s be honest—<strong>a bit misleading</strong>.</p>
<p>Because I also know people who are wildly passionate and incredibly skilled. People who are true professionals in their craft. And still, they’re struggling to pay their bills, supporting families, and dealing with anxiety about the future. They work hard, not just for joy, but out of necessity.</p>
<p>And yet, they're told they're missing something. That they haven’t <em>truly</em> followed their passion. That if they had, the money and ease would’ve come.</p>
<p>No.</p>
<p>That’s not a lack of passion. That’s the real world.</p>
<p>I wish we’d start telling the full story. That following your passion is worth it for the personal growth alone. That it might lead to <em>success</em>, but it might not. And that’s okay. That doing something meaningful is still meaningful, even when it’s hard. Even when it's underpaid. Even when it doesn’t lead to the TED Talk.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">“When you do something noble and beautiful and no one notices, don't be sad. Sunrise is a beautiful spectacle, but without a doubt, the greater part of the audience is still asleep.” John Lenon</div>
</div>

<p>So yes—<strong>find what you love.</strong> But let’s stop pretending that that is all you need.</p>
<p><strong><em>"Finding what you love may help you survive the grind. But it’s not a magic spell. It’s a seed. The rest still depends on rain, soil, timing… and a bit of luck."</em></strong></p>
]]></content:encoded></item><item><title><![CDATA[We should turn off the lights more often.]]></title><description><![CDATA[It started like any other post-apocalyptic event, with no warning, no sign of what was about to happen. I was happily code-reviewing and debugging some fixes when the computer went off. Silence, not a whirl or beep to be heard. Stumped by the situati...]]></description><link>https://joebordes.com/we-should-turn-off-the-lights-more-often</link><guid isPermaLink="true">https://joebordes.com/we-should-turn-off-the-lights-more-often</guid><category><![CDATA[Outage]]></category><category><![CDATA[Spain]]></category><category><![CDATA[life]]></category><dc:creator><![CDATA[Joe Bordes]]></dc:creator><pubDate>Tue, 29 Apr 2025 17:18:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745944376091/72ca4d3c-09f7-4a10-af3b-e6cfa58a346e.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It started like any other post-apocalyptic event, with no warning, no sign of what was about to happen. I was happily code-reviewing and debugging some fixes when the computer went off. Silence, not a whirl or beep to be heard. Stumped by the situation I sat there for a moment trying to remember what I had opened and what I had lost when a message came up on the phone. A friend from a faraway point in Spain informed me that he had no light either. Strange. Then another friend, a member of the same group, from an even farther away point in Spain informed us that he had no light either, and then the rest of the team from different parts of the country confirmed the worst. This was big. Huge!</p>
<p>We discussed this for a while until the network started failing. We couldn’t contact anybody; the internet, the very network designed back in the Cold War to survive a nuclear war, was down across a whole country, maybe even more of the world, who knows? We had no means of getting news because everything runs on electricity now! We don’t even have a radio in the house; it is all digital.</p>
<p>At this point, I got up and ran to the window expecting to see bonfires and people chasing each other around for food. What I saw was a beautiful day. A beautiful clear blue sky with some fluffy white clouds lazily floating around. Some people out for a walk or riding bicycles. The birds singing. I could even hear the waves in the ocean, probably because the constant whir of my computer fans and the everlasting construction noise were missing.</p>
<p>I am definitely watching too much television. I decided to grab a book and sat down for a read while I waited for the electricity to return. In the calm that only reading can bring to your brain, and probably due to a constant lack of sleep, I inevitably fell into a pleasant nap. When I woke up, much to my surprise, there was still no light. I looked out the window again to see that things had escalated. Now there were more people out in the street, talking, enjoying the day, and debating the outage among other things. Interesting.</p>
<p>It was time for lunch, so I went to prepare a salad and some canned food. We had a nice lunch in the sun, talking in a way we usually <strong>don’t</strong>, because we couldn’t get captured by the cell phones that were still on the table. Then my son arrived, all hyped up about not being connected and asking what he was supposed to do now. I dryly answered: “take off your shoes, grab the leash, and take your dog for a walk on the beach. That still works, even without electricity.” To my surprise, that is exactly what he did after lunch.</p>
<p>I spent the afternoon doing household errands and DIY tasks. Then I went for a 4km walk to my mom’s house. I saw more people out on the street than usual, and, THEY WERE TALKING TO EACH OTHER. Incredible: people had no reason to stay stuck at the computer or keep their heads in their phones while rudely ignoring the person next to them. They were enjoying being together. Even I stayed at my mother’s house longer than usual.</p>
<p>When I arrived back home, I had a nice chat with the family. The houses around mine were doing the same; you could hear a pleasant murmur of chit-chat as people interacted with the people around them. It reminded me of when I was much younger, when people used to sit in the street and gossip for entertainment. Yes, I am that old.</p>
<p>I went to bed early and got a good night’s sleep for a change. Nice and needed.</p>
<p>When I woke up it was back to life as normal. No catastrophe, no cataclysmic post-apocalyptic event, nature didn’t even care at all that the light had gone out for 12 hours, and WhatsApp and Telegram were buzzing as usual.</p>
<p>I am very aware that there are many many edge cases here. People trapped in elevators, hospital emergencies, people who count on that electricity to breathe in their homes, a lot of income that was not generated, and others which I am not considering, but, maybe, just maybe, <strong>we should get into the habit of turning off the lights from time to time</strong>.</p>
<p>Photo by <a target="_blank" href="https://unsplash.com/@dyuha?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Dyu - Ha</a> on <a target="_blank" href="https://unsplash.com/photos/left-human-palm-close-up-photography-nGo-UVGKAxI?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Unsplash</a></p>
]]></content:encoded></item><item><title><![CDATA[Hacktoberfest 2024]]></title><description><![CDATA[This year finds me rather demotivated. I guess the fact that there is nothing in it anymore does have an effect. I mean, it used to be something to brag about and the swag made it visible, now it is just more of what I do every month anyway. Heck, fo...]]></description><link>https://joebordes.com/hacktoberfest-2024</link><guid isPermaLink="true">https://joebordes.com/hacktoberfest-2024</guid><category><![CDATA[#hacktoberfest ]]></category><category><![CDATA[Mattermost]]></category><category><![CDATA[corebos]]></category><category><![CDATA[REST API]]></category><category><![CDATA[tiddlywiki]]></category><category><![CDATA[Node-Red]]></category><dc:creator><![CDATA[Joe Bordes]]></dc:creator><pubDate>Mon, 28 Oct 2024 20:23:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730130328159/1b9cb6f6-83c7-4b95-abb1-962b88a2a9c5.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This year finds me rather demotivated. I guess the fact that there is nothing in it anymore does have an effect. I mean, it used to be something to brag about and the swag made it visible, now it is just more of what I do every month anyway. Heck, for some reason, this year I haven’t even been pushed some offers or Holopin badges…</p>
<p>Anyway, whatever. Here is my contribution to another year of open-source greatness.</p>
<p>My first pull request is the (now) traditional <a target="_blank" href="https://tiddlywiki.com/">TiddlyWiki</a> ES translation. Thanks Jeremy</p>
<p>During September I studied <a target="_blank" href="https://www.slimframework.com/">PHP Slim Framework</a>. I found it to be a very interesting project. I like the simple but powerful approach and the ease with which you can extend and program in the framework. While I was studying the framework I immediately related it to coreBOS and thought of making a skeleton project for a “real” REST API. Thus was born my first idea for Hacktoberfest. I created the <a target="_blank" href="https://github.com/coreBOS/RESTAPI">coreBOS REST API</a> project, implemented the code for a REST API on top of the coreBOS web service API, and added the basic structure and functionality. Then, for the PR I added a <a target="_blank" href="https://github.com/coreBOS/RESTAPI/pull/1">MassDelete</a> endpoint. This project defines how coreBOS officially supports a RESTful API.</p>
<p>At that moment I received an update about <a target="_blank" href="https://nodered.org/">Node-RED</a> and decided to update the ES language files. I did that and submitted the PR, which got accepted a few days later, but they aren’t participating in Hacktoberfest and didn’t respond to my requests to join, so it didn’t count towards that challenge. It is more than good enough for me and, as I said at the start, doing four PRs in open-source projects a month is just second nature at this point…</p>
<p>Another few weeks went by as I was thinking of just letting the whole thing pass, when, while teaching someone how to do coreBOS web service queries using the <a target="_blank" href="https://github.com/tsolucio/coreBOSwsDevelopment">coreBOS Web Service Development tool</a>, I hit upon an idea that has been on my mind for years: being able to press ctrl-enter in the query box to submit the query for execution. It was always so “insignificant” that I just never got around to dedicating time to it. Thus was born my next PR!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730138024437/57c232c8-3a58-4b06-99fc-86e9c621d617.png" alt class="image--center mx-auto" /></p>
<p>For my last PR I decided to update our Mattermost integration. I updated the Mattermost plugin to the latest Golang and Mattermost versions and then updated the <a target="_blank" href="https://github.com/tsolucio/chatwithme">Chat With Me</a> extension to work with that plugin and the latest changes inside the coreBOS project. I did many commits during this update, among which I take special pride in the documentation, but I just picked a couple and marked them as hacktoberfest-accepted.</p>
<p><strong>A total of 39 commits in 9 repositories during October</strong></p>
<p>So, all in all, another year where I can still say, with pride, that I have participated in every Hacktoberfest challenge that has been held. We’ll see what happens next year. <strong>Stay tuned!</strong></p>
]]></content:encoded></item><item><title><![CDATA[Speed Up File Compression with Pigz: Parallel GZip Implementation]]></title><description><![CDATA[I learned about pigz today. While reviewing the processes in one of my Linux servers I saw this process eating up the CPU resources and immediately thought it was some cryptocurrency mining hack. After some investigation, I found a VERY nice tool!
Fr...]]></description><link>https://joebordes.com/speed-up-file-compression-with-pigz-parallel-gzip-implementation</link><guid isPermaLink="true">https://joebordes.com/speed-up-file-compression-with-pigz-parallel-gzip-implementation</guid><category><![CDATA[Linux]]></category><category><![CDATA[compression]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Backup]]></category><dc:creator><![CDATA[Joe Bordes]]></dc:creator><pubDate>Sat, 06 Jul 2024 11:19:38 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1720262836212/ee8196bb-c05d-4eb3-93b5-cb038e21be5d.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I learned about <strong>pigz</strong> today. While reviewing the processes in one of my Linux servers I saw this process eating up the CPU resources and immediately thought it was some cryptocurrency mining hack. After some investigation, I found a VERY nice tool!</p>
<p>From the manual page:</p>
<blockquote>
<p>Pigz compresses using threads to make use of multiple processors and cores. The input is broken up into 128 KB chunks with each compressed in parallel. The individual check value for each chunk is also calculated in parallel.</p>
</blockquote>
<p>So a gzip that uses all the processors in the server to do its work faster. Let's give that a try.</p>
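<p>If you want to follow along, pigz is packaged in most Linux distributions. A quick install sketch (package names assumed for Debian/Ubuntu and RHEL-family systems):</p>
<pre><code class="lang-bash"># Debian / Ubuntu
sudo apt-get install pigz

# RHEL / Rocky / Fedora
sudo dnf install pigz
</code></pre>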
<p>I looked around the server and found a 14 GB SQL dump file of one of our databases. It is a perfect file for a compression test, so I compressed and uncompressed it with both pigz and gzip.</p>
<pre><code class="lang-bash">time pigz cbcrm_pre_update_30052024.sql
time unpigz cbcrm_pre_update_30052024.sql.gz
time gzip cbcrm_pre_update_30052024.sql
time gunzip cbcrm_pre_update_30052024.sql.gz
</code></pre>
<p>The results:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Command</td><td>Real</td><td>User</td><td>System</td></tr>
</thead>
<tbody>
<tr>
<td>pigz</td><td>1m54.518s</td><td>1m18.354s</td><td>0m13.380s</td></tr>
<tr>
<td>unpigz</td><td>1m38.363s</td><td>0m49.843s</td><td>0m20.917s</td></tr>
<tr>
<td>gzip</td><td>4m6.779s</td><td>3m53.000s</td><td>0m8.151s</td></tr>
<tr>
<td>gunzip</td><td>1m37.684s</td><td>0m59.503s</td><td>0m7.100s</td></tr>
</tbody>
</table>
</div><p>The compression times show a significant difference, while the decompression times are almost the same, which I suppose is due to the complexities of creating meaningful chunks of compressed data and the internals of the gzip format.</p>
<p>The <code>htop</code> output shows a clear difference in server resource usage:</p>
<p>These first two images are two different moments of the <code>pigz</code> execution:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1720264135261/7a45c4e8-c34d-4ccd-b505-31b8e44f26b2.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1720264147330/b17e4ba6-c055-4762-a5dc-d4a75aa963bd.png" alt class="image--center mx-auto" /></p>
<p>where we clearly see all CPUs working together. The next two images are for the gzip execution.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1720264234767/3f7c5895-97a7-47ee-8a8a-e9e1e2537b7b.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1720264293715/b96b1e4a-3257-4676-b6aa-e4394c820840.png" alt class="image--center mx-auto" /></p>
<p>where we see the lack of parallel computing. The unzip executions look similar.</p>
<p>In the Atlantic article referenced below and in the manual you will find some useful execution options and combinations.</p>
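<p>As a pointer, here is a small sketch of a few commonly used combinations (the file names are just examples):</p>
<pre><code class="lang-bash"># Limit pigz to 4 threads and keep the original file after compressing
pigz -p 4 --keep cbcrm_pre_update_30052024.sql

# Maximum compression, writing to stdout so the original file is untouched
pigz -9 -c bigfile.sql &gt; bigfile.sql.gz

# Let tar use pigz for parallel tarball compression
tar -cf backup.tar.gz --use-compress-program=pigz /path/to/data
</code></pre>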
<p>A very practical use of programming and server knowledge! <strong><em>Kudos to the pigz team.</em></strong></p>
<h2 id="heading-references">References</h2>
<ul>
<li><p><a target="_blank" href="https://zlib.net/pigz/">pigz - Parallel gzip</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/madler/pigz">madler/pigz: A parallel implementation of gzip for modern multi-processor, multi-core machines.</a></p>
</li>
<li><p><a target="_blank" href="https://linux.die.net/man/1/pigz">compress/expand files - Linux man page</a></p>
</li>
<li><p><a target="_blank" href="https://www.atlantic.net/dedicated-server-hosting/how-to-install-and-use-pigz-to-compress-file-in-linux/">How to Install and Use Pigz to Compress File in Linux</a></p>
</li>
<li><p><a target="_blank" href="https://www.freepik.com/free-photo/intergalactic-travel-concept-ai-generated_133558234.htm#fromView=search&amp;page=1&amp;position=9&amp;uuid=9d1b1fe1-d45f-4c97-8572-8e36ff518d18">Image by vwalakte on Freepik</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Node-RED... wow!]]></title><description><![CDATA[I learned about Node-RED this week. That is already a surprise to me. The product had its first commit in September 2013 by the hand of Nicholas O'Leary. I would have expected to bump into it sooner, not over 10 years later. At that time, I was using...]]></description><link>https://joebordes.com/node-red-wow</link><guid isPermaLink="true">https://joebordes.com/node-red-wow</guid><category><![CDATA[Node-Red]]></category><category><![CDATA[corebos]]></category><dc:creator><![CDATA[Joe Bordes]]></dc:creator><pubDate>Fri, 05 Jan 2024 17:26:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1703532386080/efa35545-8d50-4f3b-9f66-677cab9bbba0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I learned about <a target="_blank" href="https://nodered.org/">Node-RED</a> this week. That is already a surprise to me. The product had its first commit in September 2013 by the hand of Nicholas O'Leary. I would have expected to bump into it sooner, not over 10 years later. At that time, I was using (and still am) <a target="_blank" href="https://help.hitachivantara.com/Documentation/Pentaho/Data_Integration_and_Analytics/9.1/Products/Pentaho_Data_Integration">Pentaho Data Integration</a> (aka Kettle), but I was never a big fan. I couldn't find something that would make integrations easy, and intuitive. I've tried a lot of tools in the past, from the "must-see" players like <a target="_blank" href="https://zapier.com/">Zapier</a> and <a target="_blank" href="https://www.make.com/en">Make</a> (integromat) going through some of the open source tools like <a target="_blank" href="https://n8n.io/">n8n</a>, <a target="_blank" href="https://airflow.apache.org/">Apache Airflow</a>, <a target="_blank" href="https://hop.apache.org/">Apache HOP</a>, and a bunch of others which you can find at the end of this article.</p>
<p><a target="_blank" href="https://nodered.org">Node-RED</a> defines itself as a programming tool for wiring together hardware devices, APIs, and online services in new and interesting ways. It provides a browser-based editor that makes it easy to wire together flows using the wide range of nodes in the palette that can be deployed to its runtime in a single click.</p>
<p>Internally, we seem to be pushing <a target="_blank" href="https://jitsu.com/">Jitsu</a> these days, but mostly because we have our sights set on the "mass ingestion of events", not because it fits this category. <a target="_blank" href="https://www.mage.ai/">Mage</a> has caught my attention, but it seems overkill for a lot of the "simple" moving of information around that we normally need (much like Jitsu does).</p>
<p>This past week I had a task to read from an <a target="_blank" href="https://kafka.apache.org/">Apache Kafka</a> queue and send an HTTP request to a <a target="_blank" href="https://www.softwareag.com/en_corporate/platform/iot/iot-analytics-platform.html">Cumulocity</a> endpoint, so I started searching for a tool where I could create a Cumulocity connector that would give me a nice environment to work in and some additional functionality.</p>
<p>After a while, I had reduced the list you can find below down to:</p>
<ul>
<li><p><a target="_blank" href="https://stackstorm.com/">StackStorm</a></p>
</li>
<li><p><a target="_blank" href="https://www.prefect.io/">Prefect</a></p>
</li>
<li><p><a target="_blank" href="https://nodered.org/">node-RED</a></p>
</li>
<li><p><a target="_blank" href="https://luigi.readthedocs.io/en/stable/">luigi</a></p>
</li>
</ul>
<p>I had already made my decision to work with <a target="_blank" href="https://www.prefect.io/">prefect</a>. It left the best impression of them all, but I had to make sure, so I started with the mindset of discarding the others. StackStorm and Luigi were easy: the first because it is far from the market segment of my project, and the second due to its complexity. Then I set out to look at Node-RED and justify why I wouldn't use it instead of prefect, and, <strong>boom! It hit me in the face. Wow!</strong></p>
<p>Incredible, how had I not seen this tool before? Easy to install, and the debugging and simplicity mixed with the power are mind-blowing. In a few clicks I had an Apache Kafka consumer sending emails whenever I produced messages. <strong>What?!?</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703534154126/f828fd33-c8a3-4d14-92c3-e7e33ae689ed.png" alt class="image--center mx-auto" /></p>
<p>Thousands of nodes exist, so I searched for Cumulocity &gt; <a target="_blank" href="https://flows.nodered.org/node/node-red-contrib-cumulocity"><strong>it is already there!!</strong></a></p>
<p>I have to say that if, for some reason, I couldn't use Node-RED for my project I would use <strong>prefect</strong>, but I am going to use Node-RED for this one and think I will be using it a lot in the future.</p>
<p>Once I had watched the video series, browsed around the forum, and read most of the documentation, I <a target="_blank" href="https://nodered.org/docs/creating-nodes/">implemented the lowercase development example</a> node and did what I usually do when I find an open-source product I like:</p>
<ul>
<li><p><a target="_blank" href="https://github.com/node-red/node-red/pull/4495">Translate it</a> It is astonishing how much you can learn about an application by translating it. It is like going through every part of the application with notes. Much like reading the manual. The team at Node-RED has a nice policy where a translator needs a validator to review the translation before it can get accepted, so if any reader of this post can give me a hand there, it would be much appreciated.</p>
</li>
<li><p>Join the community: <a target="_blank" href="https://discourse.nodered.org/">forum</a> and Slack.</p>
</li>
<li><p>Implement an integration with <a target="_blank" href="https://corebos.org/">coreBOS</a>, obviously :-)</p>
</li>
</ul>
<h2 id="heading-corebos-integration">coreBOS Integration</h2>
<p>The experience was, again, satisfying. The documentation, examples, community support, and overall development environment were positive. I had an integration with 11 nodes in less than a day, mostly due to my lack of comfort with node.js.</p>
<p>I recorded a couple of videos to present the integration.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/playlist?list=PL0oN2FI_W55z2-PjRHnaBFMS0ir28i9zv">https://www.youtube.com/playlist?list=PL0oN2FI_W55z2-PjRHnaBFMS0ir28i9zv</a></div>
<p> </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704470411422/057c8f66-843c-4f90-aad5-f06d48d08538.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-contenders">Contenders</h2>
<table><tbody><tr><td><p><strong>Name</strong></p></td><td><p><strong>License</strong></p></td><td><p><strong>Language</strong></p></td></tr><tr><td><p><a target="_blank" href="http://syndesis.io/">syndesis</a></p></td><td><p>Apache-2.0</p></td><td><p>Java</p></td></tr><tr><td><p><a target="_blank" href="https://n8n.io/">n8n</a></p></td><td><p>Not Business friendly</p></td><td><p>Javascript/Typescript</p></td></tr><tr><td><p><a target="_blank" href="https://github.com/huginn/huginn">huginn</a></p></td><td><p>MIT</p></td><td><p>Ruby</p></td></tr><tr><td><p><a target="_blank" href="https://www.activepieces.com/">activepieces</a></p></td><td><p>Not Business friendly</p></td><td><p>Javascript/Typescript</p></td></tr><tr><td><p><a target="_blank" href="https://actionsflow.github.io/">actionsflow</a></p></td><td><p>MIT</p></td><td><p>Javascript/Typescript</p></td></tr><tr><td><p><a target="_blank" href="https://automatisch.io/">automatisch</a></p></td><td><p>AGPL</p></td><td><p>Javascript/Typescript</p></td></tr><tr><td><p><a target="_blank" href="https://chainjet.io/">chainjet</a></p></td><td><p>Elastic License 2.0</p></td><td><p>Javascript/Typescript</p></td></tr><tr><td><p><a target="_blank" href="https://github.com/muesli/beehive">beehive</a></p></td><td><p>AGPL</p></td><td><p>Golang</p></td></tr><tr><td><p><a target="_blank" href="https://nodered.org/">node-red</a></p></td><td><p>Apache-2.0</p></td><td><p>Javascript/node.js</p></td></tr><tr><td><p><a target="_blank" href="https://stackstorm.com/">StackStorm</a></p></td><td><p>Apache-2.0</p></td><td><p>Python</p></td></tr><tr><td><p><a target="_blank" href="https://github.com/automaticmode/active_workflow">active_workflow</a></p></td><td><p>MIT</p></td><td><p>Ruby</p></td></tr><tr><td><p><a target="_blank" href="https://wiki.openstack.org/wiki/TaskFlow#Summary">taskflow</a></p></td><td><p>Apache-2.0</p></td><td><p>Python</p></td></tr><tr><td><p><a target="_blank" href="https://wiki.openstack.org/wiki/TaskFlow#Summary">airflow</a></p></td><td><p>Apache-2.0</p></td><td><p>Python</p></td></tr><tr><td><p><a target="_blank" href="https://spiffworkflow.readthedocs.io/en/latest/">prefect</a></p></td><td><p>Apache-2.0</p></td><td><p>Python</p></td></tr><tr><td><p><a target="_blank" href="https://spiffworkflow.readthedocs.io/en/latest/">spiffworkflow</a></p></td><td><p>LGPL</p></td><td><p>Python</p></td></tr><tr><td><p><a target="_blank" href="https://pydoit.org/">pydoit</a></p></td><td><p>MIT</p></td><td><p>Python</p></td></tr><tr><td><p><a target="_blank" href="https://github.com/tibcosoftware/flogo">flogo</a></p></td><td><p>BSD</p></td><td><p>Golang</p></td></tr><tr><td><p><a target="_blank" href="https://github.com/caronc/apprise">apprise</a></p></td><td><p>BSD</p></td><td><p>Python</p></td></tr><tr><td><p><a target="_blank" href="https://novu.co/">novu</a></p></td><td><p>MIT</p></td><td><p>Javascript/Typescript</p></td></tr><tr><td><p><a target="_blank" href="https://luigi.readthedocs.io/en/stable/">luigi</a></p></td><td><p>Apache-2.0</p></td><td><p>Python</p></td></tr><tr><td><p><a target="_blank" href="https://thingsboard.io/">thingsboard</a></p></td><td><p>Apache-2.0</p></td><td><p>Java</p></td></tr><tr><td><p><a target="_blank" href="https://www.mage.ai/">mage</a></p></td><td><p>Apache-2.0</p></td><td><p>Python</p></td></tr><tr><td><p><a target="_blank" href="https://jitsu.com/">jitsu</a></p></td><td><p>MIT</p></td><td><p>Javascript</p></td></tr></tbody></table>]]></content:encoded></item><item><title><![CDATA[Hacktoberfest 
2023]]></title><description><![CDATA[A joyful and heartfelt salute to a remarkable milestone – a decade of unwavering commitment and participation in this incredible journey. It fills me with immense pride to reflect on the years gone by, where I stood shoulder to shoulder with fellow e...]]></description><link>https://joebordes.com/hacktoberfest-2023</link><guid isPermaLink="true">https://joebordes.com/hacktoberfest-2023</guid><category><![CDATA[#hacktoberfest ]]></category><dc:creator><![CDATA[Joe Bordes]]></dc:creator><pubDate>Sat, 30 Sep 2023 22:48:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1696100948637/ae3b2eb5-6920-4c08-b42e-4f591929b682.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A joyful and heartfelt salute to a remarkable milestone – a decade of unwavering commitment and participation in this incredible journey. It fills me with immense pride to reflect on the years gone by, where I stood shoulder to shoulder with fellow enthusiasts, conquering challenges year after year.</p>
<p>As I pen down these thoughts, there's a tinge of nostalgia, for it's become somewhat of a tradition to sport those coveted T-shirts proudly. Yet, I find solace in the bigger picture, where our collective efforts transcend mere attire. It's about the spirit of camaraderie and the shared purpose of making a difference in the world of technology.</p>
<p>This year, I decided to take a proactive approach, meticulously preparing in advance to hit the ground running when the event kicked off. The result? Lightning-fast pull requests that not only met the deadline but also left me with ample time to contribute even more throughout the month.</p>
<p>Here's a glimpse of the projects that received my assistance:</p>
<ol>
<li><p><a target="_blank" href="https://github.com/BlackBoxVision/react-admin-extensions#ra-extensions"><strong>React-Admin Extensions Language Translation</strong></a><strong>:</strong> I took up the task of updating the Spanish language file for the incredible <a target="_blank" href="https://marmelab.com/react-admin/">React-Admin</a> project. It may be a small win, but every contribution counts.</p>
<p> <a target="_blank" href="https://github.com/BlackBoxVision/react-admin-extensions/pull/20">https://github.com/BlackBoxVision/react-admin-extensions/pull/20</a></p>
</li>
<li><p><a target="_blank" href="https://tiddlywiki.com/"><strong>Tiddlywiki Language Translation</strong></a><strong>:</strong> I've made a personal commitment to keep the Spanish translation of this invaluable tool up-to-date, ensuring that more people can benefit from its brilliance.</p>
<p> <a target="_blank" href="https://github.com/Jermolene/TiddlyWiki5/pull/7761">https://github.com/Jermolene/TiddlyWiki5/pull/7761</a>  </p>
<p> Speaking of Tiddlywiki, I can't help but share my enthusiasm for this cognitive extension. It's true that it has a steep learning curve, much like the tools I tend to gravitate towards, but the rewards it offers are immeasurable. It's like an extension of your mind, helping you organize and access information like never before.  </p>
</li>
<li><p><a target="_blank" href="https://corebos.com"><strong>coreBOS GenDoc Update</strong></a><strong>:</strong> My home. I updated the <a target="_blank" href="https://corebos.com/docs_grav/configuration-tools/gendoc">Document Generation tool</a> with the latest fixes and enhancements, with a special focus on supporting links within text and images, as well as support for denormalizing the Documents module.</p>
<p> <a target="_blank" href="https://github.com/tsolucio/corebos/pull/1558">https://github.com/tsolucio/corebos/pull/1558</a></p>
</li>
<li><p><a target="_blank" href="https://adodb.org/"><strong>ADODB Issue Fix</strong></a><strong>:</strong> While keeping an eye on the project that coreBOS relies on, I stumbled upon an issue that needed attention. Without hesitation, I jumped in to lend a helping hand, because that's what this initiative is all about – supporting each other.</p>
<p> <a target="_blank" href="https://github.com/ADOdb/ADOdb/pull/1002">https://github.com/ADOdb/ADOdb/pull/1002</a></p>
<p> <a target="_blank" href="https://github.com/ADOdb/ADOdb/issues/970">https://github.com/ADOdb/ADOdb/issues/970</a></p>
</li>
</ol>
<p>Yet another year of participating in this heartwarming initiative, where the joy of giving back to the open-source community knows no bounds. Here's to ten years of growth, learning, and making a difference together. Cheers to many more years of collaboration and achievement!</p>
<p><strong>Happy 10th Anniversary!!</strong></p>
]]></content:encoded></item><item><title><![CDATA[Dreamhack Valencia 2023]]></title><description><![CDATA[I went to Dreamhack Valencia yesterday.
I am not into gaming, besides a brief time where I spent too much time shooting with Duke Nukem 3D, Doom, Quake, and the classics before those, I have only been addicted to Plants and Zombies. I have never sat ...]]></description><link>https://joebordes.com/dreamhack-valencia-2023</link><guid isPermaLink="true">https://joebordes.com/dreamhack-valencia-2023</guid><category><![CDATA[gaming]]></category><dc:creator><![CDATA[Joe Bordes]]></dc:creator><pubDate>Mon, 10 Jul 2023 07:41:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1688967166342/045dddb7-b5fa-4dbe-954c-85507d534459.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I went to <a target="_blank" href="https://dreamhack.es/">Dreamhack Valencia</a> yesterday.</p>
<p>I am not into gaming. Besides a brief period where I spent too much time shooting with <a target="_blank" href="https://en.wikipedia.org/wiki/Duke_Nukem_3D">Duke Nukem 3D</a>, <a target="_blank" href="https://es.wikipedia.org/wiki/Doom_(videojuego_de_1993)">Doom</a>, <a target="_blank" href="https://en.wikipedia.org/wiki/Quake_(series)">Quake</a>, and the classics before those, I have only been addicted to <a target="_blank" href="https://en.wikipedia.org/wiki/Plants_vs.Zombies(video_game)">Plants vs. Zombies</a>. I have never sat in front of a PlayStation and have barely played with the Wii, so I am probably not the best person to write about Dreamhack.</p>
<p>That said, I do want to share here a comparison of my past visits with this one. I have been to 3 past editions of Dreamhack in Valencia, the last one in July 2018. Yesterday was my fourth time attending.</p>
<p>What surprised me, and the reason for this post, was the monotony.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1688969841233/23bed6f7-6cf2-4de2-94b6-782380aa4fc4.png" alt class="image--center mx-auto" /></p>
<p>I found <strong>far fewer people</strong> than in past editions. This may be due to the day I went, as this year I went on the last day of the event, but we barely had to queue at the few places where we could play.</p>
<p><strong>No merchandising.</strong> In past editions, I left the event with bags and pamphlets about all sorts of services and products. This time the only people who pushed anything at me were a very friendly guy pleading with me to play his game, since he only had two people in line, and the basketball team looking for people to play with them. Again, this may be because it was the last day, but...</p>
<p><strong>Cosplay was awesome</strong>. There were a lot of people in the role, a lot of star wars. Lara Croft was attractive and Ahsoka was very very well done! This was nice!</p>
<p><strong>Lack of innovation</strong>. I spend a large part of my time reading articles about what is happening in the (technology) world and trying software applications. I see advancements of all types in many fields, and, lately, with all the LLM hype, it seems that the world is changing in gigantic steps. Yet, here, 5 years (!) later, it all looked the same: Counter-Strike, Fortnite, GTA, Brawlhalla, and League of Legends. Seriously? Still?</p>
<p>I mean in 2018 there were at least two stands with Virtual Reality glasses that immersed you into another world. Meta is pushing those today, yet there was only one stand and nobody was there. It was so empty that we just walked by and didn't even dare to ask.</p>
<p>And, augmented reality? Where is that? When the <a target="_blank" href="https://en.wikipedia.org/wiki/Pok%C3%A9mon_Go">Pokemon Go</a> boom happened in 2016 (!) I was convinced that AR was going to become predominant and ubiquitous. Fast forward 7 years (!) and it is nowhere to be seen. What happened? What is blocking that technology?</p>
<p>As I said at the start, I am not the right person to talk about this subject and I am, for sure, missing a big part of the picture, but it all seemed like they are living off past work: do it once, get lucky, and then just leave it there generating money. Not that that is bad, but it seems far from the reality I work in, where we fight hard every day for very little.</p>
<p><strong>The price was too high</strong>. Not that the 15€ was expensive, everything is expensive nowadays, but for so little content and entertainment, it felt like a waste of money. We arrived at 12:00 and had seen everything by 14:00. We had lunch, walked around again, and left. So...</p>
<p>All in all, <strong>we had a nice time</strong> and got to spend some time together, I'm glad we went.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1688974597536/1867e157-8a77-4e65-a166-b442aabfe51a.jpeg" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Why do I waste my time doing insignificant things with no impact?]]></title><description><![CDATA[In our fast-paced and goal-oriented world, it's not uncommon to question why we sometimes find ourselves engaging in activities that seem to have no purpose or impact. Whether it's scrolling through social media, watching mindless videos, or getting ...]]></description><link>https://joebordes.com/why-do-i-waste-my-time-doing-insignificant-things-with-no-impact</link><guid isPermaLink="true">https://joebordes.com/why-do-i-waste-my-time-doing-insignificant-things-with-no-impact</guid><category><![CDATA[life-hack]]></category><category><![CDATA[Life experiences ]]></category><category><![CDATA[motivation]]></category><category><![CDATA[get things done]]></category><category><![CDATA[procrastination]]></category><dc:creator><![CDATA[Joe Bordes]]></dc:creator><pubDate>Sat, 08 Jul 2023 15:41:38 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1688825014580/1eff017c-669a-4781-8c6a-1883b0cb32f7.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In our fast-paced and goal-oriented world, it's not uncommon to question why we sometimes find ourselves engaging in activities that seem to have no purpose or impact. Whether it's scrolling through social media, watching mindless videos, or getting lost in trivial tasks (I have <a target="_blank" href="https://github.com/tsolucio/corebos/commit/ebfe58751e3587ac3a86aa07518174ea108b921c">many</a> <a target="_blank" href="https://github.com/coreBOS/coreBOSDocumentation/commit/26d6856faeedbc39c346707593eb20598246bb61">many</a> shameful commits to testify for this), we may wonder why we engage in these behaviors that often leave us feeling worse afterward. In this blog post, I'll explore some possible reasons I fall into this trap.</p>
<h3 id="heading-self-punishment-a-vicious-cycle">Self-Punishment: A Vicious Cycle</h3>
<p>One possible explanation for engaging in insignificant activities is the tendency to punish oneself. It's not uncommon for us to feel guilty or undeserving of leisure time or relaxation, especially when we have been procrastinating for too long and have a backlog of big and urgent tasks (people waiting for us), or life interruptions keep getting in our way. Consequently, we engage in meaningless tasks as a way to subconsciously inflict punishment upon ourselves. Paradoxically, this self-punishment often results in increased feelings of dissatisfaction and unhappiness, further perpetuating the cycle. Really bad place to be.</p>
<p>The best advice I can give you for this situation is to avoid it completely: <a target="_blank" href="https://todoist.com/productivity-methods/eat-the-frog">"Eat the frog"</a>. Be conscious that, sooner or later, that task that is bothering you, the one making your mind escape and punish you, will have to be done. Go take a walk, take time to introspect, and identify the emotions, triggers, or thought patterns that lead you towards unproductive activities, then come back and do it!</p>
<h3 id="heading-procrastination-avoiding-the-essential">Procrastination: Avoiding the Essential</h3>
<p>Another common culprit behind engaging in insignificant activities is procrastination. When faced with daunting or important tasks, <strong>things we really do not want to do</strong>, our minds often seek refuge in distractions. By engaging in trivial activities, we temporarily alleviate the anxiety or pressure associated with the more significant responsibilities at hand. However, this relief is short-lived, as the underlying tasks continue to linger, contributing to a sense of guilt and unproductivity and, eventually, leading us to the punishment state.</p>
<p>Establish clear goals and priorities to give your activities a sense of purpose. Break down larger tasks into smaller, manageable steps to minimize overwhelm and procrastination. Take control of your life and face those tasks you don't want to do: sooner or later either you will have to do them, or their importance will fade away in a way you may not like. <strong>Life will decide for you and you may not like the decision</strong>.</p>
<p>Again, calm your thoughts, concentrate, and <a target="_blank" href="https://en.wikipedia.org/wiki/Getting_Things_Done">get things done</a>!</p>
<h3 id="heading-seeking-mental-relief-and-breaks">Seeking Mental Relief and Breaks</h3>
<p>Engaging in trivial activities may also be a sign of our minds craving relaxation or a break from the demands of our daily lives. After long work hours or periods of intense focus (and loneliness), our mental faculties can become fatigued. Engaging in mindless or insignificant activities might serve as a form of escapism, allowing our minds to unwind and recharge. However, it's important to strike a balance and ensure that these breaks do not overshadow or hinder our overall productivity.</p>
<p>I have heard for years that this is fixed by time management techniques. Implement effective time management techniques, such as the <a target="_blank" href="https://en.wikipedia.org/wiki/Pomodoro_Technique">Pomodoro Technique</a>, to maintain focus and productivity. By allocating specific time slots for leisure and relaxation, you can enjoy guilt-free breaks while ensuring your essential tasks are accomplished. Personally, I have never been able to make this work for me; I just keep going until I die in the <a target="_blank" href="https://en.wikipedia.org/wiki/Boiling_frog">boiling water</a>. Furthermore, I have yet to cross paths with someone who is actually doing one of these successfully, but if it works for you, great!</p>
<h3 id="heading-some-guidance">Some Guidance</h3>
<p>Recognizing and understanding the underlying causes behind our tendencies to engage in insignificant activities is the first step towards breaking free from the cycle. In my personal case, it is all about hiding from things that overwhelm me and processing the overall loneliness.</p>
<p>There have been a few things that have helped me keep my sanity during these years.</p>
<ul>
<li><p><strong>Helping people</strong>. Maintaining an online activity that keeps the noise level constant every hour of the day leaves your mind so busy that you don't have much time for anything else, except eventual burnout.</p>
</li>
<li><p><strong>Exercising</strong> has been a recent discovery that is very powerful: having that daily time away from the computer, where your mind can drift into other areas, is priceless. The feeling of power in your body is too :-)</p>
</li>
<li><p><strong>Meditating/Praying</strong>. This is something I used to do for a time when I was much younger. I am trying to get back to it now, but I still have too much noise in my life to be able to appreciate its power. I see the potential, but it is a hard path to follow in today's world.</p>
</li>
<li><p><strong>Social Contact</strong>. This is similar to helping people, but in reverse: they help you. For a brief period of my life, I had a friend/coworker who was capable of snapping me out of the time-wasting cycle. She would randomly catch me there and pull me out, as well as expect explanations for the time I spent, which kept me focused. It was an interesting experience, a positive change. I have been working alone my whole life, so I don't know much about the social dynamics of an office environment, but I suppose your teammates may be able to produce a similar effect.</p>
</li>
</ul>
<p>Engaging in insignificant activities without impact can be a frustrating and demotivating experience. While there may be various reasons behind this behavior, it's important to recognize that you have the power to break free from these cycles. That is much easier said than done!</p>
<p>By understanding the underlying causes and implementing strategies to regain control over your time and priorities, you can shift towards a more fulfilling and purpose-driven life. Remember, every small step towards change counts, and with determination and self-compassion, you can redirect your time and energy toward activities that truly matter to you.</p>
<p><strong>Focus and take control!</strong></p>
]]></content:encoded></item><item><title><![CDATA[If we could capture all that energy...]]></title><description><![CDATA[I've been seeing this post on LinkedIn for a while now, seems to be trending. I tend to see the world from a different perspective many times. When I read these phrases or watch some television commercial (very rarely lately, but that is a subject fo...]]></description><link>https://joebordes.com/if-we-could-capture-all-that-energy</link><guid isPermaLink="true">https://joebordes.com/if-we-could-capture-all-that-energy</guid><category><![CDATA[life]]></category><category><![CDATA[Life experiences ]]></category><category><![CDATA[motivation]]></category><category><![CDATA[growth mindset,]]></category><dc:creator><![CDATA[Joe Bordes]]></dc:creator><pubDate>Thu, 29 Jun 2023 22:13:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1687873731417/65f7ddfc-825f-4997-a92c-e55c4a2d072f.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I've been seeing this post on LinkedIn for a while now, seems to be trending. I tend to see the world from a different perspective many times. When I read these phrases or watch some television commercial (very rarely lately, but that is a subject for another post), my mind tends to search for the hidden meaning or the other side of the story.</p>
<p>When I read this sentence, my mind started racing with possibilities and cases where it just didn't feel right. I want to share two ideas that were not what the author had in mind, but before doing that, let me state that I understand what they are trying to say and the importance of dedicating time to those things that are important to you, be that children, family, religion, hobbies,... whatever you decide that is "conflicting" with those extra hours. As long as you are "happy", <a target="_blank" href="https://hashnode.com/post/clj3zvjsb001f0al3bownh2nc">at least for now</a>.</p>
<p>The first idea I want to share is that I think it is unfair in many cases. Let's imagine a gynecologist who misses their daughter's dance because they are bringing new life into the world, a firefighter who is called in for an emergency, or an important meeting that will bring financial stability and a better future for your kids... yes, your kids don't get those hours, but you are not throwing them away; you are making the world a better place. Even if it isn't for such "noble" causes as the previous ones, YOU ARE doing it for a greater good.</p>
<p>Before writing this article I had a search and found this tweet:</p>
<p>“20 years from now, the only people who will remember that you worked late are your kids” has both a positive and a negative connotation…</p>
<p>Negative: They’ll remember you missing time with them.</p>
<p>Positive: They’ll remember your discipline, work ethic, and energy for growth.</p>
<p><a target="_blank" href="https://twitter.com/SahilBloom/status/1658809456375857153">https://twitter.com/SahilBloom/status/1658809456375857153</a></p>
<p>That is what I am trying to share: it isn't all bad; we must read carefully and question what we see. Society makes us feel bad for making that extra effort while, at the same time, telling us to pursue our dreams and sacrifice for a greater good that only a handful of people in the world actually achieve.</p>
<p>Following the line of thought above, here is the second idea I had. I don't believe the sentence is true: there are going to be many people who will remember those extra hours and that extra effort to help and touch other people's lives. How many families will remember that gynecologist? How many people will remember that you helped them in those extra hours to get something done? That you made that extra effort to teach them something? I bet there are going to be a lot of them. Each one of those people is going to spark an energy in their brain and body and remember the moments you touched their lives; even your kids will remember the times you WERE there.</p>
<p>If we could capture all that energy and put it all together, it would <strong>shine like a star</strong>! Like the star that you are!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687873907517/cb607dc9-ba5e-4f4c-8427-912c8c107536.jpeg" alt class="image--center mx-auto" /></p>
<p>Don't let these phrases change who you are, use them to become better, think, question everything, grow and <strong>SHINE</strong>!</p>
]]></content:encoded></item><item><title><![CDATA[Should we use Big O analysis? Not really!]]></title><description><![CDATA[Introduction
In the world of programming, efficient algorithms play a crucial role in determining the performance of our code. One way to analyze and predict the scalability of our code is through Big O notation. Big O provides a framework for classi...]]></description><link>https://joebordes.com/should-we-use-big-o-analysis-not-really</link><guid isPermaLink="true">https://joebordes.com/should-we-use-big-o-analysis-not-really</guid><category><![CDATA[#big o notation]]></category><category><![CDATA[corebos]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[algorithms]]></category><dc:creator><![CDATA[Joe Bordes]]></dc:creator><pubDate>Wed, 28 Jun 2023 07:00:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1687731435684/25f46089-9774-48db-aff0-86350ed63b1d.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction">Introduction</h3>
<p>In the world of programming, efficient algorithms play a crucial role in determining the performance of our code. One way to analyze and predict the scalability of our code is through Big O notation. Big O provides a framework for classifying algorithms based on their runtime behavior as the input size increases. I will try to explain what this is and why I think we don't need to worry about it.</p>
<h3 id="heading-big-o-algorithm-analysis">Big O Algorithm Analysis</h3>
<p>Big O notation allows us to assess how code execution time scales with the size of the data it processes. In other words, it answers the question, "How does code slow down as data grows?" Ned Batchelder, a Python developer, beautifully described this concept in his PyCon 2018 talk titled "<a target="_blank" href="https://youtu.be/duvZ-2UK0fc">How Code Slows as Data Grows</a>".</p>
<p>To illustrate the significance of Big O, let's consider a scenario where we have a certain amount of work that takes an hour to complete. If the workload doubles, it might be tempting to assume that it would take twice as long. However, the actual runtime depends on the nature of the work being done.</p>
<p>For instance, if reading a short book takes an hour, reading two short books will take approximately two hours. But if we can sort 500 books alphabetically in an hour, sorting 1,000 books will likely take longer than two hours, because we need to find the correct place for each book in a larger collection. On the other hand, if we are simply checking whether a new book fits on the shelf, the runtime remains roughly constant, regardless of the number of books.</p>
<p>The Big O notation precisely captures these trends in algorithmic performance. It provides a way to evaluate how an algorithm performs regardless of the specific hardware or programming language used. Big O notation does not rely on specific units like seconds or CPU cycles but focuses on the relative growth rate of an algorithm's runtime.</p>
<h4 id="heading-big-o-orders">Big O Orders</h4>
<p>Big O notation encompasses various orders that classify algorithms based on their scaling behavior. Here are the commonly used orders, ranging from the least to the most significant slowdown:</p>
<ol>
<li><p><strong>O(1), Constant Time</strong>: Algorithms that maintain a constant runtime, regardless of input size.</p>
</li>
<li><p><strong>O(log n), Logarithmic Time</strong>: Algorithms with runtime proportional to the logarithm of the input size.</p>
</li>
<li><p><strong>O(n), Linear Time</strong>: Algorithms with runtime proportional to the input size.</p>
</li>
<li><p><strong>O(n log n), N-Log-N Time</strong>: Algorithms with runtime proportional to the input size multiplied by the logarithm of the input size.</p>
</li>
<li><p><strong>O(n^2), Polynomial Time</strong>: Algorithms with runtime proportional to the square of the input size.</p>
</li>
<li><p><strong>O(2^n), Exponential Time</strong>: Algorithms with exponentially growing runtime as the input size increases.</p>
</li>
<li><p><strong>O(n!), Factorial Time</strong>: Algorithms with runtime proportional to the factorial of the input size (the highest order).</p>
</li>
</ol>
<p>The notation uses a capital O followed by parentheses containing a description of the order. For example, O(n) is read as "big oh of n" or "big oh n." It's important to note that there are more Big O orders beyond the ones mentioned here, but these are the most commonly encountered ones.</p>
<h3 id="heading-simplifying-big-o-orders">Simplifying Big O Orders</h3>
<p>Understanding the precise mathematical meanings of terms like logarithmic or polynomial is not essential for using Big O notation effectively. Here's a simplified explanation of the different orders:</p>
<ul>
<li><p>O(1) and O(log n) algorithms are considered fast.</p>
</li>
<li><p>O(n) and O(n log n) algorithms are reasonably efficient.</p>
</li>
<li><p>O(n^2), O(2^n), and O(n!) algorithms are slow.</p>
</li>
</ul>
<p>While there can be exceptions to these generalizations, they serve as helpful rules of thumb in most cases. Remember that Big O notation focuses on the worst-case scenario, describing how an algorithm behaves under unfavorable conditions.</p>
<p>Graphically it is also very easy to appreciate:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687730879504/d22cf177-853e-4ccb-8c7b-ec3e4c529b87.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-some-examples">Some examples</h3>
<h4 id="heading-o1-constant-time">O(1), Constant Time</h4>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">print_first_element</span>(<span class="hljs-params">lst</span>):</span>
    print(lst[<span class="hljs-number">0</span>])
</code></pre>
<p>No matter how big the list is we will always take the same amount of time to get the first element.</p>
<h4 id="heading-olog-n-logarithmic-time">O(log n), Logarithmic Time</h4>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">binary_search</span>(<span class="hljs-params">arr, target</span>):</span>
    low = <span class="hljs-number">0</span>
    high = len(arr) - <span class="hljs-number">1</span>

    <span class="hljs-keyword">while</span> low &lt;= high:
        mid = (low + high) // <span class="hljs-number">2</span>
        <span class="hljs-keyword">if</span> arr[mid] == target:
            <span class="hljs-keyword">return</span> mid
        <span class="hljs-keyword">elif</span> arr[mid] &lt; target:
            low = mid + <span class="hljs-number">1</span>
        <span class="hljs-keyword">else</span>:
            high = mid - <span class="hljs-number">1</span>

    <span class="hljs-keyword">return</span> <span class="hljs-number">-1</span>
</code></pre>
<p>This code implements the binary search algorithm, which operates on a sorted list. It repeatedly divides the search space in half until it finds the target element. The number of times we can divide a list of size n in half is log2(n), a simple mathematical fact: for a list of 1,000,000 elements, that is only about 20 halvings. Thus, the while loop has a Big O order of O(log n); in other words, the runtime grows logarithmically with the input size.</p>
<h4 id="heading-on-linear-time">O(n), Linear Time</h4>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">find_max</span>(<span class="hljs-params">arr</span>):</span>
    max_value = float(<span class="hljs-string">'-inf'</span>)
    <span class="hljs-keyword">for</span> num <span class="hljs-keyword">in</span> arr:
        <span class="hljs-keyword">if</span> num &gt; max_value:
            max_value = num
    <span class="hljs-keyword">return</span> max_value
</code></pre>
<p>This code snippet finds the maximum value in a list. It iterates through each element once, comparing it with the current maximum value. The runtime increases linearly with the input size. The more elements, the longer it takes.</p>
<h4 id="heading-on-log-n-n-log-n-time">O(n log n), N-Log-N Time</h4>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">binary_search</span>(<span class="hljs-params">arr, target</span>):</span>
    low = <span class="hljs-number">0</span>
    high = len(arr) - <span class="hljs-number">1</span>
    arr.sort() <span class="hljs-comment"># Python's built-in sort is O(n log n)</span>
    <span class="hljs-keyword">while</span> low &lt;= high:
        mid = (low + high) // <span class="hljs-number">2</span>
        <span class="hljs-keyword">if</span> arr[mid] == target:
            <span class="hljs-keyword">return</span> mid
        <span class="hljs-keyword">elif</span> arr[mid] &lt; target:
            low = mid + <span class="hljs-number">1</span>
        <span class="hljs-keyword">else</span>:
            high = mid - <span class="hljs-number">1</span>

    <span class="hljs-keyword">return</span> <span class="hljs-number">-1</span>
</code></pre>
<p>In this example, we have the same binary search we had above, but the function first sorts the input list. Python's built-in sort runs in O(n log n) time, which dominates the O(log n) cost of the search that follows, so the runtime of this version of the function grows in proportion to n log n.</p>
<h4 id="heading-on2-polynomial-time">O(n^2), Polynomial Time</h4>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">bubble_sort</span>(<span class="hljs-params">arr</span>):</span>
    n = len(arr)
    <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> range(n):
        <span class="hljs-keyword">for</span> j <span class="hljs-keyword">in</span> range(<span class="hljs-number">0</span>, n - i - <span class="hljs-number">1</span>):
            <span class="hljs-keyword">if</span> arr[j] &gt; arr[j + <span class="hljs-number">1</span>]:
                arr[j], arr[j + <span class="hljs-number">1</span>] = arr[j + <span class="hljs-number">1</span>], arr[j]
</code></pre>
<p>This code snippet implements the bubble sort algorithm. It compares adjacent elements and swaps them if they are in the wrong order, repeatedly iterating over the list until it becomes sorted. The runtime increases quadratically with the input size.</p>
<h4 id="heading-o2n-exponential-time">O(2^n), Exponential Time</h4>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">fibonacci</span>(<span class="hljs-params">n</span>):</span>
    <span class="hljs-keyword">if</span> n &lt;= <span class="hljs-number">1</span>:
        <span class="hljs-keyword">return</span> n
    <span class="hljs-keyword">return</span> fibonacci(n - <span class="hljs-number">1</span>) + fibonacci(n - <span class="hljs-number">2</span>)
</code></pre>
<p>This code calculates the Fibonacci sequence recursively. It exhibits exponential growth as the input size (n) increases because each recursive call branches into two more recursive calls, leading to a large number of redundant computations.</p>
<h4 id="heading-on-factorial-time">O(n!), Factorial Time</h4>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">permute</span>(<span class="hljs-params">arr</span>):</span>
    <span class="hljs-keyword">if</span> len(arr) == <span class="hljs-number">0</span>:
        <span class="hljs-keyword">return</span> [[]]
    permutations = []
    <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> range(len(arr)):
        rest = arr[:i] + arr[i+<span class="hljs-number">1</span>:]
        <span class="hljs-keyword">for</span> p <span class="hljs-keyword">in</span> permute(rest):
            permutations.append([arr[i]] + p)
    <span class="hljs-keyword">return</span> permutations
</code></pre>
<p>This code generates all permutations of a given list. It uses a recursive approach, where it selects an element from the list and recursively generates permutations for the remaining elements. The number of permutations grows factorially with the input size.</p>
<p>I won't delve into further details as I believe this type of evaluation is unnecessary. Let's see why.</p>
<h3 id="heading-considering-the-size-of-n">Considering the Size of n</h3>
<p>While Big O notation is a powerful tool for analyzing algorithmic efficiency, it's important to remember that its significance is most apparent when dealing with a large amount of data. In real-world scenarios, the amount of data is often relatively small. In such cases, investing significant effort in designing sophisticated algorithms with lower Big O orders may not be necessary.</p>
<p>This is the last article in the series about optimizing and evaluating PHP applications/<a target="_blank" href="https://corebos.com">coreBOS</a> performance for large datasets. I brushed up on my knowledge of Big O notation in case we could use it to evaluate some of our bottlenecks, but I understand that even with 32 million records we are still talking about small numbers for today's processing power. Our constraints are not going to be in the algorithms but in the business processes.</p>
<p>The algorithms we use and an understanding of Big O notation will be useful for making decisions, but I am aligned with Rob Pike, one of the designers of the Go programming language, when he expressed this idea in one of his rules of programming: <strong>"Fancy algorithms are slow when 'n' is small, and 'n' is usually small"</strong>. Most developers work on everyday applications rather than massive data centers or complex computations. Profiling your code, that is, running it under a profiler, can provide more concrete insights into its performance in such contexts.</p>
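<p>As a small illustration of that idea (in PHP, since this series is about PHP applications; the <code>buildIndex()</code> function and the data size are made up for the example), a quick wall-clock measurement often tells us more than the asymptotic order when n is small:</p>
<pre><code class="lang-php">&lt;?php
// Hypothetical helper: build a lookup index from an array of rows.
function buildIndex(array $rows): array {
    $index = array();
    foreach ($rows as $row) {
        $index[$row['id']] = $row;
    }
    return $index;
}

// A "realistic" data set: n is usually small.
$rows = array();
for ($i = 0; $i &lt; 10000; $i++) {
    $rows[] = array('id' =&gt; $i, 'name' =&gt; 'record '.$i);
}

// Measure the actual wall-clock time instead of guessing from Big O.
$start = microtime(true);
buildIndex($rows);
$elapsedMs = (microtime(true) - $start) * 1000;
echo 'buildIndex took '.round($elapsedMs, 2).' ms for '.count($rows).' rows'."\n";
</code></pre>
<p>If the measured time is already negligible at the realistic data size, a fancier algorithm is probably not worth the extra complexity.</p>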
<h3 id="heading-conclusion">Conclusion</h3>
<p>Understanding Big O notation and algorithm analysis empowers developers to design efficient code that can handle increasing data sizes gracefully. By classifying algorithms based on their scaling behavior, we gain insights into how different orders impact runtime. While Big O notation is most valuable when dealing with substantial data, profiling remains a practical approach for optimizing code in everyday programming tasks. By combining these techniques, developers can strike a balance between efficiency and practicality, ultimately enhancing their software's performance.</p>
<p>Be careful with <a target="_blank" href="https://accidentallyquadratic.tumblr.com/">Accidentally Quadratic</a> code and keep security in mind!</p>
]]></content:encoded></item><item><title><![CDATA[Understanding PHP Performance Measures]]></title><description><![CDATA[Welcome to the final installment of our series, where we delve into the world of XHProf measurements and discover the power of XHGUI in comprehending these values. In this article, we will explore the intricacies of the metrics obtained from XHProf a...]]></description><link>https://joebordes.com/understanding-php-performance-measures</link><guid isPermaLink="true">https://joebordes.com/understanding-php-performance-measures</guid><category><![CDATA[PHP]]></category><category><![CDATA[Performance Optimization]]></category><category><![CDATA[corebos]]></category><dc:creator><![CDATA[Joe Bordes]]></dc:creator><pubDate>Mon, 26 Jun 2023 06:00:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1687444578537/d1b59c52-d703-4a48-8e2a-1e1923ad1cf1.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Welcome to the final installment of our series, where we delve into the world of XHProf measurements and discover the power of XHGUI in comprehending these values. In this article, we will explore the intricacies of the metrics obtained from XHProf and uncover how leveraging XHGUI can enhance our understanding of code performance and gain deeper insights into our codebase.</p>
<h3 id="heading-xhgui-user-interface">XHGUI User Interface</h3>
<p>At the end of our previous post, we made two profile calls to an update workflow task in coreBOS and could see them in the <strong>"Recent"</strong> tab of XHGUI.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687446631089/5025a79d-b37b-47f7-b1d5-14849f129854.png" alt="XHGUI Recent tab" class="image--center mx-auto" /></p>
<p>The view shows the server name the profile was executed on, the call method used, the URL, the time, and the measurements (the small sketch after the list below illustrates what each metric captures):</p>
<ul>
<li><p><strong>URL</strong> The URL of the request</p>
</li>
<li><p><strong>Time</strong> When the request was made</p>
</li>
<li><p><strong>wt</strong> or "Wall Time". This is the amount of time that passed during the request. It’s short for "wall clock" time, meaning the amount of time a human has to wait for the process to finish</p>
</li>
<li><p><strong>cpu</strong> The CPU time spent on this request</p>
</li>
<li><p><strong>mu</strong> Memory used for this request</p>
</li>
<li><p><strong>pmu</strong> The peak memory usage at any point during this request</p>
</li>
</ul>
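<p>To make those four metrics more tangible, here is a minimal, self-contained PHP sketch (the function names are invented for the example; this is not XHGui code). If you profiled it, the sleep would mostly show up as wall time, the busy loop as CPU time, and the large temporary array as the difference between peak memory and the memory still in use at the end:</p>
<pre><code class="lang-php">&lt;?php
function waitForExternalService() {
    // Sleeping inflates wall time (wt) but costs almost no CPU time (cpu).
    usleep(200000); // 200 ms
}

function crunchNumbers() {
    // A busy loop consumes both wall time and CPU time.
    $sum = 0;
    for ($i = 0; $i &lt; 1000000; $i++) {
        $sum += $i * $i;
    }
    return $sum;
}

function buildBigTemporaryArray() {
    // A large temporary array pushes peak memory (pmu) above the memory
    // still in use when the request finishes (mu).
    $tmp = range(1, 500000);
    return count($tmp);
}

waitForExternalService();
crunchNumbers();
buildBigTemporaryArray();
</code></pre>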
<p>We can sort ascending and descending by most of those columns, and the next three entries in the top menu do exactly that, sorting by one of the columns:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687446825964/948fd5c0-c0b2-41f5-aff9-2d178a89d1ff.png" alt="XHGUI Sort wall time" class="image--center mx-auto" /></p>
<p>The <strong>"Custom View"</strong> tab permits us to launch some condition-based queries to get a set of raw data out of the performance measures saved in the database. I don't see the use case of this information.</p>
<p><strong>"Watch functions"</strong> allow us to set function names or regular expression patterns that we want to see at the top of each run result page. It simply shows the information on those functions at the top of the call result page. Nothing really important but a nice way to visually separate some functions from the rest if necessary.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687448146561/15330559-da5b-40af-b15a-635f3b140c25.png" alt="XHGUI Watch Functions" class="image--center mx-auto" /></p>
<p><strong>"Waterfall"</strong> deserves a section of its own, so let's leave that for later.</p>
<p>To get more details about a specific "run", the term used to refer to generating a response, click on the date column for the URL you are interested in. You can also click on the URL to see a list of runs and choose between them by clicking on their dates. Either way, you’ll then see a more detailed view of just this one request with a lot of information about the call.</p>
<ul>
<li><p>The top part of the screen shows the URL of the run.</p>
</li>
<li><p>The left-hand sidebar shows full details of the call and its parameters.</p>
</li>
<li><p>The main part of the screen shows some data about the top time-takers and memory-hoggers from all the various functions that got called during the run. There’s a detailed key below the graphs showing which bar relates to what.</p>
</li>
<li><p>On top of the graphs, we will see information on watched functions if there are any.</p>
</li>
<li><p>Below, we have a sorted table with all the functions called and detailed information about each one.</p>
</li>
</ul>
<p>This table shows more detailed statistics for each of the component parts of the request. We see how many times each one was called as well as the time, CPU, and memory statistics for each function executed. Both inclusive and exclusive metrics are shown; the exclusive number gives the values for just this function whereas the inclusive values are for this function and any functions that are called from inside it. We can sort by the different values, we can filter functions and we can also click on the name of any function to get more details about that particular execution.</p>
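<p>A tiny sketch may help picture the difference between the two (invented functions, not coreBOS code): the time spent inside <code>formatRow()</code> is part of <code>renderList()</code>'s inclusive time, while only the work done directly inside <code>renderList()</code>, the loop and the concatenation, counts toward its exclusive time.</p>
<pre><code class="lang-php">&lt;?php
function formatRow(array $row): string {
    // Work done here is exclusive to formatRow()...
    return strtoupper($row['name']).' ('.$row['id'].')';
}

function renderList(array $rows): string {
    $out = '';
    foreach ($rows as $row) {
        // ...but it still counts toward renderList()'s inclusive time,
        // because formatRow() is called from inside it.
        $out .= formatRow($row)."\n";
    }
    return $out;
}
</code></pre>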
<p>Another informative (and visually appealing) feature of XHGui is the <strong>Callgraph</strong>, which shows you where the time goes in a lovely visual fashion:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687452507549/4b5e15d5-6f53-445d-9f54-96edf346773f.jpeg" alt class="image--center mx-auto" /></p>
<p>This shows a very nice visual hierarchy of which function calls which other function, with the consumption of each function conveyed by color and size. Best of all, it's an interactive graph: you can get more information about each function by hovering over it, click on the "view symbol" link to see the details of the function call, and click on the box itself to highlight the path of the calls and open a left menu with the details.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687452850458/f48a2622-b86b-449b-ae7c-f50a1f83b492.png" alt class="image--center mx-auto" /></p>
<p>Really nice!</p>
<h3 id="heading-comparing-runs">Comparing runs</h3>
<p>Now we have all the information we need to optimize the code under analysis. By studying the different functions and time consumed we can set out to optimize the code and make it faster. Try to sort the functions by exclusive CPU (descending), or by memory usage, or exclusive wall time, and have a look at what lies at the top of the list. Analyze the expensive functions and try to refactor or optimize them. Also, don’t forget to check the call count; a function that is run repeatedly will deliver improvements several times over when optimized.</p>
<p>In order to know if we are making it faster or not we would need to be able to compare the execution before and after the optimization, and that is exactly what the <strong>"Compare Runs"</strong> functionality does for us.</p>
<p>Once in a run, we will find the <strong>"Compare this run"</strong> button in the upper right corner. That will permit us to select another run and retrieve a detailed output of the differences between the two executions. The summary table shows us the old and new metrics, and also the difference in both actual numbers and percentage change. The details table then gives the change in value for all the functions. You can sort by any of the columns to find the information you are looking for.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687453626251/a3bfa138-46a0-4a27-8ccb-56d23900cfd8.png" alt class="image--center mx-auto" /></p>
<p><mark>Red</mark> values are higher in 'new'. <mark>Green</mark> values are lower in 'new'.</p>
<h3 id="heading-waterfall-display">Waterfall Display</h3>
<p>XHGui's waterfall display exists because concurrent requests can affect each other. Concurrent database requests, CPU-intensive activities, and even locks on session files can become relevant. With an Ajax-heavy application, understanding the page build is far more complex than a single load; hopefully, the waterfall can help.</p>
<p>This feature is important when analyzing live data from production installs where many users are executing the same code in parallel. It will segregate the information by IP and run to make it easier to understand each individual user execution.</p>
<h3 id="heading-closing-comments">Closing comments</h3>
<p>In the <a target="_blank" href="https://joebordes.com/benchmarking-code-with-phpbench">first post of the series</a>, I presented the PHPBench tool. This is where that tool becomes relevant because <strong>you can’t know how much you’ve improved until you start to measure your progress</strong>, which is why we must benchmark an application before proceeding with any optimizations. It’s also important to have some idea of what a realistic set of numbers should look like otherwise we may find ourselves reaching for unattainable goals. But this is also a circular relation as we need to get some idea of the bottlenecks of the application to understand which sections we need to benchmark and start optimizing.</p>
<p>Lastly, I must mention that we can use XHProf in production to analyze specific business procedures that are slow there. The extension is not intrusive and can be enabled on a production install. That said, it should be done for a specific, announced time frame, informing the users, and not left there forever. PHP will work faster if XHProf is not installed (marginally, but faster).</p>
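<p>A minimal sketch of how that opt-in, time-boxed profiling could look, assuming the <code>build/ProfileConfig.php</code> script from the previous post is available; the <code>CB_PROFILE</code> environment variable and the 1% sampling rate are illustrative choices, not coreBOS or XHGui settings:</p>
<pre><code class="lang-php">&lt;?php
// Only profile when explicitly requested or for a small random sample of requests.
include 'build/ProfileConfig.php'; // defines $profiler (see the previous post)

$explicitlyRequested = getenv('CB_PROFILE') === '1'; // hypothetical opt-in flag
$sampled = mt_rand(1, 100) === 1;                    // roughly 1% of requests

if ($explicitlyRequested || $sampled) {
    $profiler-&gt;enable([]);
    register_shutdown_function(function () use ($profiler) {
        // Save the run when the request finishes.
        $profiler-&gt;save($profiler-&gt;disable());
    });
}
</code></pre>
<p>That keeps the overhead limited to a fraction of the traffic and makes it easy to remove the instrumentation once the analysis window is over.</p>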
<p>The goal of this series is to give you the tools you need to analyze and understand the performance of your code base and a much better insight into how it works.</p>
<p>Now we are ready to start applying these tools to our <a target="_blank" href="https://corebos.com">coreBOS</a> 32M project!</p>
<p><strong>Thanks for reading!</strong></p>
<h3 id="heading-references">References</h3>
<ul>
<li><a target="_blank" href="https://techportal84.rssing.com/chan-35760995/article5.html">Profiling PHP Applications with XHGui</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Profile your PHP code]]></title><description><![CDATA[This is the last phase of the project to evaluate the recommended maximum size of data that we can manage in a coreBOS install before we get into the list of tasks of profiling and performance. This phase constructs the recommended infrastructure to ...]]></description><link>https://joebordes.com/profile-your-php-code</link><guid isPermaLink="true">https://joebordes.com/profile-your-php-code</guid><category><![CDATA[PHP]]></category><category><![CDATA[performance metrics]]></category><category><![CDATA[profiling]]></category><category><![CDATA[corebos]]></category><dc:creator><![CDATA[Joe Bordes]]></dc:creator><pubDate>Sat, 24 Jun 2023 08:00:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1687280777090/ca74c86c-1f5b-4451-955e-35f0a376c690.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This is the last phase of the project to evaluate the recommended maximum size of data that we can manage in a <a target="_blank" href="https://corebos.com">coreBOS</a> install before we get into the list of tasks of profiling and performance. This phase constructs the recommended infrastructure to profile the code in search of its bottlenecks.</p>
<p>I have to admit that I was surprised to see how little information there is on the internet about this topic. Despite being a predominant language on the web and a perfectly valid, efficient, and incredible language, PHP has no hype these days 😒.</p>
<p>After reading some old posts and reviewing some GitHub projects I eventually landed on the only open-source option that we have: <a target="_blank" href="https://www.php.net/manual/en/intro.xhprof.php">XHProf</a> and <a target="_blank" href="https://github.com/perftools/xhgui">XHGUI</a></p>
<p><strong>XHProf</strong> is an official PHP extension that generates tracing measurements for your PHP code. The official definition says it keeps track of call counts and inclusive metrics for arcs in the dynamic callgraph of a program. It computes exclusive metrics in the reporting/post-processing phase, such as wall (elapsed) time, CPU time, and memory usage. A function's profile can be broken down by callers or callees.</p>
<p><strong>XHGUI</strong> is a web GUI for the XHProf PHP extension, using a database backend, and pretty graphs to make it easy to use and interpret.</p>
<p>So I started constructing our infrastructure.</p>
<h3 id="heading-xhprof">XHProf</h3>
<p>This part was very easy. XHProf is an official PHP extension, so my Linux operating system already had it available. I am using the exceptional <a target="_blank" href="https://techvblogs.com/blog/install-multiple-php-versions-on-ubuntu-22-04">ppa:ondrej/php</a> repository, which not only gives me many different PHP versions but also ships the xhprof extension for each of them. So all I had to do to get it working was an</p>
<p><code>apt-get install php8.0-xhprof php8.1-xhprof php8.2-xhprof</code></p>
<p>plus a restart of the Apache service, and there it is:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687330629461/2907f0b3-4c53-4fe2-94bf-064a8c64b0f3.jpeg" alt class="image--center mx-auto" /></p>
<p>Note that there is no <code>output_dir</code> configured; that setting has a default value and is not needed, as we will see next.</p>
<h3 id="heading-using-xhprof">Using XHProf</h3>
<p>Now we have to see if it is working. Following the official PHP documentation, I create this script.</p>
<pre><code class="lang-php"><span class="hljs-meta">&lt;?php</span>
xhprof_enable(XHPROF_FLAGS_CPU + XHPROF_FLAGS_MEMORY);

<span class="hljs-keyword">for</span> ($i = <span class="hljs-number">0</span>; $i &lt;= <span class="hljs-number">1000</span>; $i++) {
    $a = $i * $i;
}

$xhprof_data = xhprof_disable();
var_dump($xhprof_data);
</code></pre>
<p>which gives me this output</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687360565550/29d97d92-2904-4c3d-b9d0-627aab1f33e4.jpeg" alt class="image--center mx-auto" /></p>
<p>These are the measurements that we need to feed into XHGUI, along with some additional meta information. We will see how to do that after starting XHGUI, but if we wanted to save these measurements to be imported later, we could save that data into any directory we want (which is why we don't care about the <code>xhprof.output_dir</code>) with some code like this</p>
<pre><code class="lang-php">$type = <span class="hljs-string">'xhproftest'</span>; <span class="hljs-comment">// a category for grouping</span>
$directory = <span class="hljs-string">'./'</span>; <span class="hljs-comment">// wherever we want</span>
$file_name = $directory.uniqid().$type.<span class="hljs-string">'.xhprof'</span>;
$file = fopen($file_name, <span class="hljs-string">'w'</span>);
<span class="hljs-keyword">if</span> ($file) {
   fwrite($file, serialize($xhprof_data)); <span class="hljs-comment">// xhprof_disable() returns an array, so serialize it before writing</span>
   fclose($file);
}
</code></pre>
<p>When activating xhprof in the call above, we used the predefined constants XHPROF_FLAGS_CPU and XHPROF_FLAGS_MEMORY. The available flags are:</p>
<ul>
<li><p><code>XHPROF_FLAGS_NO_BUILTINS</code> Used to skip all built-in (internal) functions.</p>
</li>
<li><p><code>XHPROF_FLAGS_CPU</code> Used to add <abbr>CPU</abbr> profiling information to the output.</p>
</li>
<li><p><code>XHPROF_FLAGS_MEMORY</code> Used to add memory profiling information to the output.</p>
</li>
</ul>
<p>So you can set any of those as per your requirements.</p>
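<p>For example, if we also wanted to skip PHP's internal functions to reduce noise, we could combine the three flags with a bitwise OR (equivalent to the addition used in the first script, since each flag is a distinct bit):</p>
<pre><code class="lang-php">// capture CPU and memory data, but leave out built-in functions
xhprof_enable(XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY | XHPROF_FLAGS_NO_BUILTINS);

// ... code under test ...

$xhprof_data = xhprof_disable();
</code></pre>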
<p>Now let's see how to visualize that data.</p>
<h3 id="heading-xhgui">XHGUI</h3>
<p>The instructions on the project GitHub page are clear and simple, but I decided to try the Docker install, as it is much easier and cleaner to roll back in case I wasn't convinced. So I cloned the repository, read the docker-compose.yml file to make sure it wasn't going to do anything strange, and started it.</p>
<p>The output of the docker-compose command was all correct and I could access the web user interface at http://localhost:8142</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687381896333/ae9577ba-9c65-4939-b0cb-bede76485787.jpeg" alt class="image--center mx-auto" /></p>
<p>So, just like that, we are ready to start sending the profiling measurements there.</p>
<h3 id="heading-using-php-profiler">Using PHP Profiler</h3>
<p><a target="_blank" href="https://github.com/perftools/php-profiler">PHP Profile</a> is a PHP profiling library to submit profilings to <a target="_blank" href="https://github.com/perftools/xhgui">XHGui</a>. This library encapsulates the logic described above and gives us some high-level functions to work with profiling. This library does all the heavy lifting for us so we don't have to make calls to the database directly nor have to call the xhprof enable and disable. It gives a nice abstraction to do profiling instead of having to do low-level tasks.</p>
<p>I read the documentation and installed the library with</p>
<p><code>composer require perftools/php-profiler</code></p>
<p>Next, I read the configuration file and finally created this code which is equivalent to the first profiling code we had above.</p>
<pre><code class="lang-php"><span class="hljs-meta">&lt;?php</span>
<span class="hljs-keyword">require_once</span> <span class="hljs-string">'vendor/autoload.php'</span>;
<span class="hljs-keyword">use</span> <span class="hljs-title">Xhgui</span>\<span class="hljs-title">Profiler</span>\<span class="hljs-title">Profiler</span>;
<span class="hljs-keyword">use</span> <span class="hljs-title">Xhgui</span>\<span class="hljs-title">Profiler</span>\<span class="hljs-title">ProfilingFlags</span>;

$config = <span class="hljs-keyword">array</span>(
    <span class="hljs-comment">// This allows to configure, what profiling data to capture</span>
    <span class="hljs-string">'profiler.flags'</span> =&gt; <span class="hljs-keyword">array</span>(
        ProfilingFlags::CPU,
        ProfilingFlags::MEMORY,
        ProfilingFlags::NO_BUILTINS,
        ProfilingFlags::NO_SPANS,
    ),

    <span class="hljs-comment">// Saver to use.</span>
    <span class="hljs-string">'save.handler'</span> =&gt; Profiler::SAVER_UPLOAD,

    <span class="hljs-string">'save.handler.upload'</span> =&gt; <span class="hljs-keyword">array</span>(
        <span class="hljs-string">'url'</span> =&gt; <span class="hljs-string">'http://localhost:8142/run/import'</span>,
        <span class="hljs-comment">// The timeout option is in seconds and defaults to 3 if unspecified.</span>
        <span class="hljs-string">'timeout'</span> =&gt; <span class="hljs-number">3</span>,
        <span class="hljs-comment">// the token must match 'upload.token' config in XHGui</span>
        <span class="hljs-string">'token'</span> =&gt; <span class="hljs-string">'token'</span>,
    ),
);

$profiler = <span class="hljs-keyword">new</span> Profiler($config);
$profiler-&gt;enable([]);

<span class="hljs-keyword">for</span> ($i = <span class="hljs-number">0</span>; $i &lt;= <span class="hljs-number">1000</span>; $i++) {
    $a = $i * $i;
}

$profiler_data = $profiler-&gt;disable();
$profiler-&gt;save($profiler_data);
var_dump($profiler_data);
</code></pre>
<p>Let's go over that.</p>
<ul>
<li><p>I load the library and namespaces</p>
</li>
<li><p>I create a configuration array variable.</p>
<ul>
<li><p>I set the equivalent flags that we had in our first script</p>
</li>
<li><p>I define where the library should save the measurements (SAVER_UPLOAD). This requires some attention. XHGUI, which we started with docker, supports the endpoint <code>run/import</code> so we can call that from the PHP Profiler library, but both applications MUST share a secret token. I had to stop the docker-compose and add the <code>XHGUI_UPLOAD_TOKEN</code> environment variable with the value <code>token</code></p>
</li>
</ul>
</li>
<li><p>I initialize the profiler library with the configuration</p>
</li>
<li><p>I start and end profiling</p>
</li>
<li><p>I save the data in the XHGUI import endpoint which ends up in the MongoDB database</p>
</li>
<li><p>I dump the data and see that it is richer than the first test we did</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687386112340/9d761fae-bffa-4822-9b9c-46971a567956.jpeg" alt class="image--center mx-auto" /></p>
<p>I run that twice and reload the XHGUI web interface:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687386345220/afc4c0de-f426-4bc6-9959-e10d07ffbc79.jpeg" alt class="image--center mx-auto" /></p>
<p>This is the change I made in the docker-compose file</p>
<pre><code class="lang-diff">diff --git a/docker-compose.yml b/docker-compose.yml
index e5efbd6..3132828 100644
<span class="hljs-comment">--- a/docker-compose.yml</span>
<span class="hljs-comment">+++ b/docker-compose.yml</span>
@@ -11,6 +11,7 @@ services:
     environment:
       - XHGUI_MONGO_HOSTNAME=mongo
       - XHGUI_MONGO_DATABASE=xhprof
<span class="hljs-addition">+      - XHGUI_UPLOAD_TOKEN=token</span>
     ports:
       - "8142:80"
</code></pre>
<p><strong>Convinced!</strong></p>
<h3 id="heading-corebos">coreBOS</h3>
<p>With this knowledge, adding profiling to coreBOS is simple.</p>
<ul>
<li><p>I use composer to add the PHP Profiler library.</p>
</li>
<li><p>I create a script to load the configuration and the object in <code>build/ProfileConfig.php</code>, which looks like this</p>
</li>
</ul>
<pre><code class="lang-php"><span class="hljs-meta">&lt;?php</span>
<span class="hljs-keyword">require_once</span> <span class="hljs-string">'vendor/autoload.php'</span>;
<span class="hljs-keyword">use</span> <span class="hljs-title">Xhgui</span>\<span class="hljs-title">Profiler</span>\<span class="hljs-title">Profiler</span>;
<span class="hljs-keyword">use</span> <span class="hljs-title">Xhgui</span>\<span class="hljs-title">Profiler</span>\<span class="hljs-title">ProfilingFlags</span>;

$ProfileConfig = <span class="hljs-keyword">array</span>(
    <span class="hljs-comment">// This allows to configure, what profiling data to capture</span>
    <span class="hljs-string">'profiler.flags'</span> =&gt; <span class="hljs-keyword">array</span>(
        ProfilingFlags::CPU,
        ProfilingFlags::MEMORY,
        ProfilingFlags::NO_BUILTINS,
        ProfilingFlags::NO_SPANS,
    ),

    <span class="hljs-comment">// Saver to use.</span>
    <span class="hljs-string">'save.handler'</span> =&gt; \Xhgui\Profiler\Profiler::SAVER_UPLOAD,

    <span class="hljs-string">'save.handler.upload'</span> =&gt; <span class="hljs-keyword">array</span>(
        <span class="hljs-string">'url'</span> =&gt; <span class="hljs-string">'http://localhost:8142/run/import'</span>,
        <span class="hljs-comment">// The timeout option is in seconds and defaults to 3 if unspecified.</span>
        <span class="hljs-string">'timeout'</span> =&gt; <span class="hljs-number">3</span>,
        <span class="hljs-comment">// the token must match 'upload.token' config in XHGui</span>
        <span class="hljs-string">'token'</span> =&gt; <span class="hljs-string">'token'</span>,
    ),
);
$profiler = <span class="hljs-keyword">new</span> Profiler($ProfileConfig);
</code></pre>
<p>Obviously, you have to adapt the configuration to your environment.</p>
<p>Now we can include this script and profile any code we need with just three lines of code. I tested that with the workflow Update task. I added the three lines to the <code>modules/com_vtiger_workflow/tasks/VTUpdateFieldsTask.inc</code> script; two at the start of the <code>doTask()</code> method, to include the script above and start profiling, and the last line at the end of the method to stop profiling and save.</p>
<pre><code class="lang-diff">diff --git a/modules/com_vtiger_workflow/tasks/VTUpdateFieldsTask.inc b/modules/com_vtiger_workflow/tasks/VTUpdateFieldsTask.inc
index ed6a16e54..bdb30a59d 100644
<span class="hljs-comment">--- a/modules/com_vtiger_workflow/tasks/VTUpdateFieldsTask.inc</span>
<span class="hljs-comment">+++ b/modules/com_vtiger_workflow/tasks/VTUpdateFieldsTask.inc</span>
@@ -29,6 +29,8 @@ class VTUpdateFieldsTask extends VTTask {

        public function doTask(&amp;$entity) {
                global $adb, $current_user, $logbg, $from_wf, $currentModule;
<span class="hljs-addition">+               include 'build/ProfileConfig.php';</span>
<span class="hljs-addition">+               $profiler-&gt;enable([]);</span>
                $logbg-&gt;debug('&gt; UpdateFieldsTask');
                $from_wf = true;
                $util = new VTWorkflowUtils();
@@ -183,6 +185,7 @@ class VTUpdateFieldsTask extends VTTask {
                $util-&gt;revertUser();
                $from_wf = false;
                $logbg-&gt;debug('&lt; UpdateFieldsTask');
<span class="hljs-addition">+               $profiler-&gt;save($profiler-&gt;disable());</span>
        }
 }
 ?&gt;
</code></pre>
<p>So we have to include the profiling configuration and start profiling before the code we want to measure and stop profiling and save after it:</p>
<pre><code class="lang-php"><span class="hljs-comment">// ...</span>
<span class="hljs-keyword">include</span> <span class="hljs-string">'build/ProfileConfig.php'</span>;
$profiler-&gt;enable([]);
<span class="hljs-comment">// code we want to measure</span>
$profiler-&gt;save($profiler-&gt;disable());
<span class="hljs-comment">// ...</span>
</code></pre>
<p>Now we are ready to start measuring the performance of coreBOS</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687431331214/e3970e2d-550a-4c19-bf86-9312df42e0f0.jpeg" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687431397836/b8424d32-a91b-4e14-9179-83846d196aeb.jpeg" alt class="image--center mx-auto" /></p>
<h3 id="heading-summary">Summary</h3>
<ul>
<li><p>I understand better now why there isn't much information out there: it is really easy to get this working, and there aren't any other options except the paid ones</p>
</li>
<li><p>we need three tools working together to do profiling: XHProf, PHP Profiler, and XHGUI</p>
</li>
<li><p>getting them to work together is relatively easy</p>
</li>
<li><p>I will do another post explaining how to use the XHGUI information we are saving</p>
</li>
</ul>
<p><strong>Thanks for reading.</strong></p>
<h3 id="heading-references">References</h3>
<ul>
<li><p><a target="_blank" href="https://corebos.com">coreBOS</a></p>
</li>
<li><p><a target="_blank" href="https://www.php.net/manual/en/intro.xhprof.php">XHProf</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/perftools/xhgui">XHGUI</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/perftools/php-profiler">PHP Profile</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Benchmarking code with PHPBench]]></title><description><![CDATA[As I continue to work on the project to evaluate the recommended maximum size of data that we can manage in a coreBOS install. I started the next steps of constructing a profiling and performance infrastructure for the project so we can analyze objec...]]></description><link>https://joebordes.com/benchmarking-code-with-phpbench</link><guid isPermaLink="true">https://joebordes.com/benchmarking-code-with-phpbench</guid><category><![CDATA[PHP]]></category><category><![CDATA[Benchmark]]></category><category><![CDATA[#MeasurementAndAnalytics]]></category><category><![CDATA[corebos]]></category><dc:creator><![CDATA[Joe Bordes]]></dc:creator><pubDate>Wed, 21 Jun 2023 23:09:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1687248543540/ee624ca0-5c33-4496-8709-5230e3b7b23e.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As I continue to work on the project to evaluate the recommended maximum size of data that we can manage in a <a target="_blank" href="https://corebos.com">coreBOS</a> install, I started the next steps of constructing a profiling and performance infrastructure for the project so we can analyze objectively how the application performs with a database of 32 million records and 5000 users.</p>
<p>Studying the different options, I found <a target="_blank" href="https://github.com/phpbench/phpbench">PHPBench</a>, a benchmark runner for PHP analogous to PHPUnit but for performance, and decided to give it a try.</p>
<p>The <a target="_blank" href="https://phpbench.readthedocs.io/">documentation</a> is spot-on, clear, direct, and just what you need. Getting the tool configured and working is a breeze.</p>
<p>The set of options and functionality is amazing and simple. It just works.</p>
<p>After reading the documentation, I decided to follow the same approach coreBOS already has with the <a target="_blank" href="https://github.com/tsolucio/coreBOSTests">unit test project</a>: it is a separate project that you can apply to any coreBOS install that needs to run the unit tests, and I expect we will be doing performance benchmarking on only a handful of installs as well.</p>
<p>So I created a <a target="_blank" href="https://github.com/coreBOS/Benchmarks.git">separate repository</a> and downloaded the PHPBench phar file into it.</p>
<pre><code class="lang-bash">curl -Lo phpbench.phar https://github.com/phpbench/phpbench/releases/latest/download/phpbench.phar
</code></pre>
<p>I validated it as per the instructions and pushed it to the repository.</p>
<p>Now we can start benchmarking some code.</p>
<p>The first challenge is to load the coreBOS infrastructure so that we can access all the awesome functionality. It turns out that the test project also had that problem and solved it by creating a generic "load everything" script. So I just copied that one from there and included it at the top of the benchmark script.</p>
<p>I followed the <code>Quick Start</code> instructions and created this script, which benchmarks two functions over 5 iterations of 1,000 revolutions each.</p>
<pre><code class="lang-php"><span class="hljs-meta">&lt;?php</span>
<span class="hljs-keyword">include_once</span> <span class="hljs-string">'build/evBench/loadcorebos.php'</span>;

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">CommonUtilsBench</span> </span>{

    <span class="hljs-comment">/**
    * <span class="hljs-doctag">@Revs</span>(1000)
    * <span class="hljs-doctag">@Iterations</span>(5)
    */</span>
    <span class="hljs-keyword">public</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">benchgetCurrencyName</span>(<span class="hljs-params"></span>) </span>{
        getCurrencyName(<span class="hljs-number">1</span>, <span class="hljs-literal">true</span>);
    }

    <span class="hljs-comment">/**
    * <span class="hljs-doctag">@Revs</span>(1000)
    * <span class="hljs-doctag">@Iterations</span>(5)
    */</span>
    <span class="hljs-keyword">public</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">benchgpopup_from_html</span>(<span class="hljs-params"></span>) </span>{
        popup_from_html(<span class="hljs-string">'$string'</span>, <span class="hljs-literal">true</span>);
    }
}
</code></pre>
<p>I ran the benchmarks with this command</p>
<pre><code class="lang-bash">build/evBench/phpbench.phar run build/evBench/include/utils/CommonUtils.php --report=aggregate
</code></pre>
<p>and received this output</p>
<pre><code class="lang-bash">PHPBench (1.2.10) running benchmarks... <span class="hljs-comment">#standwithukraine</span>
with PHP version 8.2.7, xdebug ✔, opcache ❌

\CommonUtilsBench

    benchgetCurrencyName....................I4 - Mo1.801μs (±9.46%)
    benchgpopup_from_html...................I4 - Mo1.729μs (±1.13%)

Subjects: 2, Assertions: 0, Failures: 0, Errors: 0
+------------------+-----------------------+-----+------+-----+----------+---------+--------+
| benchmark        | subject               | <span class="hljs-built_in">set</span> | revs | its | mem_peak | mode    | rstdev |
+------------------+-----------------------+-----+------+-----+----------+---------+--------+
| CommonUtilsBench | benchgetCurrencyName  |     | 1000 | 5   | 21.985mb | 1.801μs | ±9.46% |
| CommonUtilsBench | benchgpopup_from_html |     | 1000 | 5   | 21.985mb | 1.729μs | ±1.13% |
+------------------+-----------------------+-----+------+-----+----------+---------+--------+
</code></pre>
<p><strong>That simple! Really, really nice!</strong></p>
<h3 id="heading-assertions">Assertions</h3>
<p>Another awesome feature that I found was <a target="_blank" href="https://phpbench.readthedocs.io/en/latest/annotributes.html#assertions">Assertions</a>. You can annotate your PHPBench scripts with a powerful expression language to validate the timing of your functions and then add these benchmarks to your CI/CD process to detect variations in the execution time of the critical functions of your code base.</p>
<p>I tried that by adding this validation to the two functions in the script above.</p>
<p><code>* @Assert("mode(variant.time.avg) &lt; 200 ms")</code></p>
<p>and received this output</p>
<pre><code class="lang-bash">    benchgetCurrencyName....................I4 ✔ Mo1.826μs (±2.06%)
    benchgpopup_from_html...................I4 ✔ Mo1.757μs (±8.47%)

Subjects: 2, Assertions: 2, Failures: 0, Errors: 0
+------------------+-----------------------+-----+------+-----+----------+---------+--------+
| benchmark        | subject               | <span class="hljs-built_in">set</span> | revs | its | mem_peak | mode    | rstdev |
+------------------+-----------------------+-----+------+-----+----------+---------+--------+
| CommonUtilsBench | benchgetCurrencyName  |     | 1000 | 5   | 21.985mb | 1.826μs | ±2.06% |
| CommonUtilsBench | benchgpopup_from_html |     | 1000 | 5   | 21.985mb | 1.757μs | ±8.47% |
+------------------+-----------------------+-----+------+-----+----------+---------+--------+
</code></pre>
<p>Notice the new (green) line above the table</p>
<p><code>Subjects: 2, Assertions: 2, Failures: 0, Errors: 0</code></p>
<p>Next, I tried to force an error with this assertion</p>
<p><code>* @Assert("mode(variant.time.avg) &lt; 1 microsecond")</code></p>
<p>and got this response</p>
<pre><code class="lang-bash">    benchgetCurrencyName....................I4 ✘ Mo1.791μs (±2.95%)
    benchgpopup_from_html...................I4 ✘ Mo1.726μs (±24.63%)

2 variants failed:

  ✘ \CommonUtilsBench::benchgetCurrencyName <span class="hljs-comment"># </span>

    1) mode(variant[time][avg]) &lt; 1 microsecond
       = 1.791373776908 &lt; 1 microsecond
       = <span class="hljs-literal">false</span>

  ✘ \CommonUtilsBench::benchgpopup_from_html <span class="hljs-comment"># </span>

    1) mode(variant[time][avg]) &lt; 1 microsecond
       = 1.726025440313 &lt; 1 microsecond
       = <span class="hljs-literal">false</span>

Subjects: 2, Assertions: 2, Failures: 2, Errors: 0
+------------------+-----------------------+-----+------+-----+----------+---------+---------+
| benchmark        | subject               | <span class="hljs-built_in">set</span> | revs | its | mem_peak | mode    | rstdev  |
+------------------+-----------------------+-----+------+-----+----------+---------+---------+
| CommonUtilsBench | benchgetCurrencyName  |     | 1000 | 5   | 21.986mb | 1.791μs | ±2.95%  |
| CommonUtilsBench | benchgpopup_from_html |     | 1000 | 5   | 21.985mb | 1.726μs | ±24.63% |
+------------------+-----------------------+-----+------+-----+----------+---------+---------+
</code></pre>
<p><strong>Simple and effective!</strong></p>
<p>This is what the final code looked like</p>
<pre><code class="lang-php">    <span class="hljs-comment">/**
    * <span class="hljs-doctag">@Revs</span>(1000)
    * <span class="hljs-doctag">@Iterations</span>(5)
    * <span class="hljs-doctag">@Assert</span>("mode(variant.time.avg) &lt; 20 ms")
    */</span>
    <span class="hljs-keyword">public</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">benchgetCurrencyName</span>(<span class="hljs-params"></span>) </span>{
        getCurrencyName(<span class="hljs-number">1</span>, <span class="hljs-literal">true</span>);
    }
</code></pre>
<h3 id="heading-hard-part">Hard Part</h3>
<p>Now we have the infrastructure to measure the performance of functions and processes inside the coreBOS application, but we have to continue asking ourselves, "Which are the methods and processes that we should measure?", "Where are the bottlenecks of our code?", "What code do we have to keep under control?"</p>
<p>Stay tuned as I continue to create the infrastructure to answer those questions.</p>
<p>Thanks for reading.</p>
<h3 id="heading-references">References</h3>
<ul>
<li><p><a target="_blank" href="https://corebos.com">coreBOS</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/phpbench/phpbench">PHPBench</a></p>
</li>
<li><p><a target="_blank" href="https://phpbench.readthedocs.io/">PHPBench Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/coreBOS/Benchmarks.git">coreBOS Benchmark repository</a></p>
</li>
</ul>
]]></content:encoded></item></channel></rss>