
Enterprise Shield on Dinosaur Hardware

There’s a certain kind of satisfaction that comes from taking something old and making it do something remarkable. This is the story of how a 2008 MacBook 13” aluminum — a machine that shipped the same year the iPhone App Store launched — ended up running a multi-threaded, self-healing, boot-persistent IP threat blocking system protecting a production web server on Ubuntu 24.04. It took a full day of iterative development, a fair amount of debugging, and one very honest conversation about an 18-year-old piece of hardware.


The Starting Point

The project began with a script called Enterprise Shield v11.4. On paper it did what it promised: it blocked traffic from hostile Autonomous System Numbers (ASNs) and geographic regions by maintaining a massive ipset of known-bad IP ranges, then dropping packets matching that set at the firewall level. In practice, it was held together with duct tape.

The first code review found problems at every layer. There was a truncated grep statement in the country block loop — a literal syntax error that prevented the script from ever completing. The leading-zero stripping logic for CIDR normalisation ran in the wrong order, cleaning data after the validation regex had already rejected it. The script injected custom iptables rules directly while also running ufw --force reset, meaning UFW silently wiped those rules on every reload. And perhaps most practically damaging: it fetched IP data for every ASN serially, sleeping two seconds between each query, making a large blocklist a multi-hour operation.

The objective was clear: fix everything, make it fast, make it resilient, and make it understand its own hardware.


Understanding the Hardware

Before optimising anything, we needed to understand what we were working with. The machine is a 2008 MacBook with a Core 2 Duo processor — a 64-bit dual-core chip from the era when 4GB of RAM was considered ambitious. This one has been upgraded to 8GB, which turned out to matter significantly for one specific decision later.

The Core 2 Duo changes the calculus on parallelism. Modern CPUs handle process spawning cheaply. On a processor from 2008, every subprocess fork is measurably expensive, and context switching between background jobs has real overhead. This shaped nearly every optimisation decision that followed: eliminate unnecessary subprocess forks, use bash builtins instead of external binaries wherever possible, and be conservative with thread counts.

It also runs Ubuntu Server 24.04, which introduced a subtle wrinkle: the system ships with iptables-nft, a compatibility shim that translates iptables commands into nftables rules. Early in the project we suspected this would break the ipset integration — specifically the --match-set rule that does the actual packet dropping. A quick check of the live chain output confirmed it was working:

93  5448 DROP  ...  match-set blocked_asns src

Those 93 drops told us the integration was solid. We moved on.


Phase 1: Making It Correct

The first rewrite — v11.5 — focused entirely on correctness before touching performance.

The truncated grep was fixed. The UFW/iptables conflict was documented and mitigated by injecting the ipset DROP rule into /etc/ufw/before.rules, making it survive UFW reloads. The leading-zero stripping was reordered so it ran before validation, not after. The ipset restore file was given a flush directive so stale entries from previous partial runs couldn’t accumulate. The country feed fetches were given --fail flags so 404 error pages didn’t silently pass through as IP data.
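For reference, an ipset restore file with a leading flush looks roughly like this — the set name mirrors the post, but the entries and maxelem value are illustrative, not taken from the actual script:

```
create blocked_asns hash:net family inet maxelem 1048576 -exist
flush blocked_asns
add blocked_asns 203.0.113.0/24 -exist
add blocked_asns 198.51.100.0/22 -exist
```

The flush line guarantees the set starts empty even if a previous run died partway through populating it.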

Most importantly: the script was given a proper trap ... EXIT so temp files were always cleaned up, the root check was moved to the absolute first line, and every (( counter++ )) was replaced with counter=$(( counter + 1 )) — because in bash, arithmetic that evaluates to zero returns exit code 1, which set -e interprets as a fatal error.
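The arithmetic pitfall is easy to reproduce. A minimal demonstration, run without set -e so the failing form can be shown surviving:

```shell
#!/usr/bin/env bash
# (( counter++ )) is a post-increment: the expression evaluates to the
# OLD value, 0, so the command's exit status is 1 -- fatal under set -e.
counter=0
(( counter++ )) || echo "exit status was $?"
# The safe form is an assignment, which always exits 0:
counter=$(( counter + 1 ))
echo "counter=$counter"
```

The first line prints "exit status was 1" even though the increment itself succeeded — which is exactly why it kills a `set -e` script.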


Phase 2: Making It Fast

With a correct foundation, the next challenge was the whois lookup bottleneck. The serial version queried RADB one ASN at a time with a two-second sleep between each. With 152 ASNs in the blocklist, that’s over five minutes of wall clock time before any actual data processing begins.

The first parallel version — v11.6 — used export -f to pass a bash worker function into xargs -P subshells. It looked right. It wasn’t. On many systems, xargs subshells don’t reliably inherit exported bash functions. Workers spawned successfully, registered their completion files, and wrote nothing. The blocklist came back at roughly one-third of its expected size. The failure was completely silent.

The fix was architectural. Instead of relying on function inheritance, the worker logic was written to a self-contained bash script at runtime — /tmp/shield_whois_worker.sh — and each background job executed that file directly. No inheritance, no environment dependencies, no silent failures.
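The pattern is simple to sketch. Everything here is illustrative — the real worker performs a whois lookup rather than echoing a placeholder, and the paths are temp files rather than the script's own:

```shell
#!/usr/bin/env bash
WORKER=$(mktemp)
OUTDIR=$(mktemp -d)

# Write a self-contained worker script: no exported functions,
# no inherited environment, nothing that can silently fail to arrive.
cat > "$WORKER" <<'EOF'
#!/usr/bin/env bash
asn="$1"; outdir="$2"
echo "routes-for-$asn" > "$outdir/$asn.txt"
EOF
chmod +x "$WORKER"

for asn in AS13335 AS15169 AS32934; do
    "$WORKER" "$asn" "$OUTDIR" &   # each background job executes the file directly
done
wait
ls "$OUTDIR"
```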

The second parallel problem was subtler: all threads were hitting RADB simultaneously, triggering connection throttling that caused empty responses with no error code. RADB doesn’t say “rate limited.” It just stops returning data. The solution was per-worker random jitter (0–2.5 seconds) combined with inter-batch pausing — every 20 dispatches, all active workers drain and a 3-second pause lets RADB’s connection count settle before the next batch opens.
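A stripped-down sketch of the dispatch loop: the whois call is replaced with a no-op, the item count is cut to ten so it finishes quickly, and the variable names are mine rather than the script's — only the MAX_JOBS and BATCH numbers mirror the post:

```shell
#!/usr/bin/env bash
MAX_JOBS=4
BATCH=20
dispatched=0
for asn in $(seq 1 10); do
    (
        # per-worker random jitter in [0, 2.5) seconds, so workers
        # don't open connections to RADB in lockstep
        sleep "$(awk -v s="$RANDOM$asn" 'BEGIN { srand(s); printf "%.2f", rand()*2.5 }')"
        :   # the whois lookup would go here
    ) &
    dispatched=$(( dispatched + 1 ))
    # cap concurrency at MAX_JOBS
    while [ "$(jobs -rp | wc -l)" -ge "$MAX_JOBS" ]; do sleep 0.2; done
    # every BATCH dispatches, drain all workers and let RADB settle
    if [ $(( dispatched % BATCH )) -eq 0 ]; then
        wait
        sleep 3
    fi
done
wait
echo "dispatched=$dispatched"
```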

The final thread count settled at 4. Eight threads was causing the silent data loss. Four threads with batching gives full coverage with no throttling, and on a Core 2 Duo the overhead of managing 4 concurrent background jobs is well within budget.


Phase 3: Making It Resilient

A firewall system that runs once nightly creates a specific failure mode: if something goes wrong with a data source — RADB is slow, a country feed returns an error, the network hiccups — the next scheduled run could silently shrink the blocklist without anyone noticing.

The delta check was the answer. After every run, the entry count is written to /var/lib/shield/last_entry_count. The following night, before committing the new ruleset, the script compares. If the new count is more than 10% below the previous run, the atomic swap is aborted entirely — the existing live ipset is preserved untouched — and an alert is written to a separate log file.

“Atomic swap” is the key phrase here. The shield script never modifies the live ipset directly. It builds a complete replacement set in /tmp, populates it, then executes ipset swap blocked_asns-temp blocked_asns — a single kernel operation that is instantaneous and never leaves the firewall in a partially-updated state. The machine is always either running the old ruleset or the new one. There is no window where it’s running neither.
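The sequence, roughly — shown as a fragment rather than a runnable example because it needs root and a live ipset, and the restore file path is a placeholder:

```
ipset create blocked_asns-temp hash:net -exist    # staging set, built off to the side
ipset restore -exist < /tmp/shield_restore.txt    # populate it in one batched call
ipset swap blocked_asns-temp blocked_asns         # atomic: old and new trade places
ipset destroy blocked_asns-temp                   # now holds the old data; discard it
```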


Phase 4: Surviving Reboots

This is where the project surfaced its most interesting architectural gap.

The ipset kernel module stores its data entirely in memory. Every reboot wipes it. The script saves a snapshot to /etc/ipset.conf after each run, but nothing was loading that snapshot back on boot. The result: after every reboot, the machine came up with an empty blocked_asns set. UFW loaded its rules, including the DROP rule that referenced blocked_asns — but the set it referenced didn’t exist. Traffic flowed freely until 2AM when the cron job fired.

The fix required two systemd services with precise ordering:

shield-ipset-restore.service   (Before ufw.service)
    └── ufw.service
          └── shield-iptables-restore.service  (After ufw.service)

The ipset service runs before UFW and loads the saved set. The iptables service runs after UFW and rebuilds the custom SHIELD-LOGIC iptables chain using iptables-restore --noflush, which merges the saved rules into UFW’s ruleset without disturbing UFW’s own chains.

Both services include first-boot guards: if their respective state files don’t exist yet (fresh install before the first cron run), they exit cleanly rather than failing and potentially delaying UFW startup.
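A sketch of what the ipset restore unit might look like — the real unit's directives may differ, but the Before= ordering and the first-boot guard are the load-bearing parts:

```
[Unit]
Description=Restore shield ipset before UFW loads its rules
DefaultDependencies=no
After=local-fs.target
Before=ufw.service

[Service]
Type=oneshot
RemainAfterExit=yes
# first-boot guard: exit cleanly if no snapshot exists yet
ExecStart=/bin/sh -c '[ -f /etc/ipset.conf ] || exit 0; exec ipset restore -exist < /etc/ipset.conf'

[Install]
WantedBy=multi-user.target
```

Type=oneshot with RemainAfterExit=yes is what produces the "active (exited)" status seen in the verification output.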

After the first reboot with both services running, verification was clean:

Active: active (exited)   ← correct for a oneshot service
status=0/SUCCESS
shield-ipset-restore: blocklist restored from /etc/ipset.conf

Phase 5: The Operational Tooling

A blocking system is only as useful as its ability to respond to threats that aren’t in the scheduled blocklist. The companion tool — block_asn.sh — evolved through five versions across the session.

The original script had several problems: it saved to the wrong path (meaning penalty box entries vanished on reboot), it validated IP addresses with a pattern that accepted octets above 255, and it made one kernel call per route which was painfully slow for large ASNs.

The rewrite introduced two distinct modes:

Penalty box — adds ASN routes directly to the live ipset. No file writes. Effective immediately. Cleared automatically on the next 2AM cron run when the ipset is rebuilt from scratch.

Permanent — does everything the penalty box does, plus appends the ASN to /etc/blocklist_asns.txt with a timestamp and an operator-supplied reason note. Persists forever.

Later, a third mode was added: --cidr accepts a single IP range for penalty box injection. CIDRs are never written to the permanent blocklist by design — they’re too specific and ephemeral for a long-term list.

The most important optimisation was replacing the per-route injection loop with a single ipset restore call. For a 500-route ASN, the old approach was 500 process forks and 500 kernel netlink calls. The new approach is one of each. The practical difference is roughly 5 seconds versus 50 milliseconds.
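The shape of the optimisation can be sketched without a live firewall. This builds a 500-line restore payload in a single pass — the set name and routes are invented, and the final ipset restore call is commented out because it needs root:

```shell
#!/usr/bin/env bash
routes=$(mktemp)
printf '10.%d.0.0/16\n' $(seq 1 500) > "$routes"   # pretend: 500 routes for one ASN

payload=$(mktemp)
while read -r cidr; do
    # printf is a bash builtin: no fork per route
    printf 'add blocked_asns-temp %s -exist\n' "$cidr"
done < "$routes" > "$payload"

echo "payload lines: $(wc -l < "$payload")"
# ipset restore < "$payload"    # one fork, one batched kernel operation
```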

A before/after entry count snapshot provides transparent reporting on every injection — you know exactly how many routes were genuinely new versus already present.


The Bug That Was Hiding Everywhere

Late in the project, a test with CIDR 186.179.0.0/18 failed validation with “Invalid CIDR.” Tracing through the normalisation pipeline revealed a bug that had been quietly corrupting data all along.

The perl zero-stripping substitution s/(^|\.)0+\./$1./g was intended to fix malformed octets like 023.23. from RADB output. Instead, it matched any zero octet followed by a dot — including valid ones. 103.0.0.0/24 became 103..0.0/24. 5.0.0.0/8 became 5..0.0/8. Both silently failed validation and were dropped.

Every network with a zero in a non-terminal octet position — and there are many — had been invisible to the blocklist since the normalisation code was written.

The fix changes 0+ to 0+([0-9]), requiring the match to include at least one additional digit after the leading zeros. Lone zeros are left alone. The fix was applied to both enterprise_shield.sh and block_asn.sh.

# Before (broken)
perl -pe 's/(^|\.)0+\./$1./g'

# After (correct)
perl -pe 's/(^|\.)0+([0-9])\./$1$2./g'

Results

At the end of the session, the system was running with:

  • 343,966 blocked IP ranges loaded in the live ipset, consuming approximately 9.8MB of kernel memory
  • Boot-persistent protection — full blocklist restored within 3 seconds of kernel start, before UFW processes its first rule
  • Nightly automated updates at 2AM with delta checking, atomic swaps, and structured logging
  • On-demand injection for immediate response via block_asn.sh
  • Full documentation covering installation, operation, monitoring, and uninstall

The final cron run after all fixes produced:

[INFO ] --- Run complete: status=SUCCESS entries=343966 elapsed=76s ---

76 seconds. On an 18-year-old machine. For a complete rebuild of a 344,000-entry firewall blocklist from live external data sources.


What Made It Work

Looking back across the session, a few principles drove the outcomes:

Fix correctness before optimising. The original script had bugs that would have made any performance work meaningless. Getting it right first meant the parallel version had a solid foundation to build on.

Understand the failure modes of your tools. export -f failing silently. RADB returning empty responses instead of errors when rate-limited. ipset restore erroring on an existing set without -exist. None of these produced clear error messages. Each required understanding what the tool was supposed to do versus what it actually did under pressure.

Instrument everything. The structured logging, delta checks, and before/after entry counts weren’t cosmetic additions. They were what allowed us to diagnose the shrinking entry count issue (thread pressure), the double-logging issue (cron redirect + direct file append), and the missing public IP (lookup happening during UFW teardown).

Respect the hardware. Reducing threads from 8 to 4, using bash builtins instead of forking date on every log line, sorting in RAM with a 1GB buffer — these decisions were driven by understanding that a Core 2 Duo is not a cloud VM. It has constraints. Working within them produced a faster, more stable result than ignoring them.


The Machine

The 2008 MacBook 13” aluminum is not a recommended platform for production server workloads. It draws more power than a modern ARM server, runs warmer, and has a shorter remaining hardware lifespan than purpose-built server equipment.

It’s also, as of this writing, blocking nearly 344,000 hostile IP ranges, rebuilding its blocklist every night, surviving reboots gracefully, and responding to threats on demand in under a second.

Sometimes the best server is the one you already have.


Core Web Vitals and Web Performance Strategy: A Reality Check

Google’s Core Web Vitals initiative has become a larger part of discussions that we have with customers as they begin setting new performance KPIs for 2021-2022. These conversations center on the values generated by Lighthouse, WebPageTest, and Performance Insights testing, as well as the cumulative data collected by CrUX and Akamai mPulse and how to use the collected information to “improve” these numbers.

Google has delayed the implementation of Core Web Vitals into the Page Rank system twice. The initial rollout was scheduled for 2020, but that was delayed as the initial disruption caused by the pandemic saw many sites halt all innovation and improvement efforts until the challenges of a remote work environment could be overcome. The next target date was set for May 2021, but that has been pushed back to June 2021, with a phase-in period that will last until August 2021.

Why the emphasis on improving the Core Web Vitals values? The simple reason is that these values will now be used as a factor in the Google Page Rank algorithm. Any input that affects an organization’s SEO immediately draws a great deal of attention, as these rankings can have a measurable effect on revenue and engagement, depending on the customer.

While conversations may start with the simple request from customers for guidance around what they can do to improve their Core Web Vitals metrics, what may be missed in these conversations is a discussion of the wider context of what the Core Web Vitals metrics represent.

The best place to start is to define what the Core Web Vitals are (Google has done this) and how the data is collected. The criteria for gathering Core Web Vitals in mPulse are:

Visitors who engage with the site and generate Page View or SPA Hard pages and are using recent versions of Chromium-based browsers.

However, there is a separate definition, the one that affects the Page Rank algorithm. For Page Rank data, the collection criteria gets a substantial refinement:

Visitors who engage with the site and generate Page View or SPA Hard pages who (it is assumed) originated from search engine results (preferably generated by Google) and are using the Chrome and Chrome Mobile browsers.

There are a number of caveats in both those statements! When described this way, customers may start to ask how relevant these metrics are for driving real-world performance initiatives and whether improving Core Web Vitals metrics will actually drive improvement in business KPIs like conversion, engagement, and revenue.

During conversations with customers, it is also critical to highlight the notable omissions in the collection of Core Web Vitals metrics. Some of these may cause customers to be even more cautious about applying this data.

  • No Data from WebKit Browsers. None of the browsers based on WebKit (Safari, Mobile Safari, Mobile Safari WebView) collect Cumulative Layout Shift or Largest Contentful Paint values. Recent updates have allowed for the collection of First Contentful Paint, but that is not one of the metrics used in Core Web Vitals. The argument can be made that Safari and Mobile Safari already deliver highly optimized web experiences, but not providing insight into a significant (if not dominant) user population will leave organizations wondering what global performance metrics (i.e., metrics collected by all browser families) they can use to represent and track the experience for all visitors.
  • Limitations in CrUX Data Collection. The data that Google collects for CrUX reporting only originates from Chrome and Chrome Mobile browsers. So, even though Chromium-based browsers, such as Edge and Opera, currently collect Core Web Vitals data, it is not used by Google in Page Rank. This narrow focus may further erode the focus on Core Web Vitals in organizations where Chrome and Chrome Mobile are only one part of a complex browser environment.
  • Performance Delta between Mobile Safari and Chrome Mobile. With only very limited exceptions, Mobile Safari substantially outperforms Chrome Mobile in standard performance measurement metrics (Time to Visually Ready, Page Load, etc.). This forces organizations to focus on optimizing Chrome Mobile performance, which is substantially more challenging due to the diversity in the Android device and OS population. As well, without a proven business reason, getting customers to update their mobile performance experience based on Core Web Vitals data could become challenging once this exception is realized.
  • Exclusion of SPA Soft Navigations. Up until recently, none of the Core Web Vitals metrics were captured for SPA Soft Navigations (see below for changes to Cumulative Layout Shift). This is understandable as the focus for Google is the performance of pages originating from Google Search Results, and navigating from the results will not generate a SPA Soft navigation. However, the performance and experience advantages of SPA Soft Navigations for visitors is almost completely lost to Core Web Vitals.
  • Current Lack of Clear Links Between Core Web Vitals and Business KPIs. Google has been emphasizing the Core Web Vitals as the new metrics that companies should use to guide performance decisions. However, there has yet to be much (or any) evidence that can be used to show organizations that improving these metrics leads to increased revenue, conversions, or engagement. Without quantifiable results that link these new performance KPIs to improvements in business KPIs, there may be hesitancy to drive efforts to improve these metrics.

Google is, however, listening to feedback on the collection of Core Web Vitals. Already there have been changes to the Cumulative Layout Shift (CLS) collection methodology that allow it to more accurately reflect long-running pages and SPA sites. This does lead to some optimism that the collection of Core Web Vitals data may evolve over time so that it includes a far broader subset of browsers and customer experiences, reflecting the true reality and complexity of customer interactions on the modern web application.

Exposing the Core Web Vitals metrics to a wider performance audience will lead to customer questions about how web performance professionals are using this information to shape performance strategies. Overall, the recommendation thus far is to approach this data with caution and emphasize the current focus these metrics have (affecting Page Rank results), the limitations that exist in the data collection (limited browser support, lack of SPA Soft Navigations, mobile data only from Android), and the lack of substantial verification that improving Core Web Vitals has a quantifiable positive effect on business KPIs.


The Dog and The Toolbox: Using Web Performance Services Effectively

The Dog and The Toolbox

One day, a dog stumbled upon a toolbox left on the floor. There was a note on it, left by his master, which he couldn’t read. He was only a dog, after all.

He sniffed it. It wasn’t food. It wasn’t a new chew toy. So, being a good dog, he walked off and lay on his mat, and had a nap.

When the master returned home that night, the dog was happy and excited to see him. He greeted his master with joy, and brought along his favorite toy to play with.

He was greeted with yelling and anger and “bad dog”. He was confused. What had he done to displease his master? Why did the master keep yelling at him, and pointing at the toolbox? He had been good and left it alone. He knew that it wasn’t his.

With his limited understanding of human language, he heard the words “fix”, “dishwasher”, and “bad dog”. He knew that the dishwasher was the yummy cupboard that all of the dinner plates went in to, and came out less yummy and smelling funny.

He also knew that the cupboard had made a very loud sound that had scared the dog two nights ago, and then had spilled yucky water on the floor. He had barked to wake his master, who came down, yelling at the dog, then yelling at the machine.

But what did “fix” mean? And why was the master pointing at the toolbox?

The Toolbox and Web Performance

It is far too often that I encounter companies that have purchased a Web performance service that they believe will fix their problems. They then pass the day-to-day management of this information on to a team that is already overwhelmed with data.

What is this team supposed to do with this data? What does it mean? Who is going to use it? Does it make my life easier?

When it comes time to renew the Web performance services, the company feels cheated. And they end up yelling at the service company who sold them this useless thing, or at their own internal staff for not using this tool.

To an overwhelmed IT team, Web performance tools are another toolbox on the floor. They know it’s there. It’s interesting. It might be useful. But it makes no sense to them, and is not part of what they do.

Giving your dog the toolbox does not fix your dishwasher. Giving an IT team yet another tool does not improve the performance of a Web site.

Only in the hands of a skilled and trained team does the Web performance of a site improve, or the dishwasher get fixed. As I have said before, a tool is just a tool. The question that all organizations must face is what they want from their Web performance services.

Has your organization set a Web performance goal? How do you plan to achieve your goals? How will you measure success? Does everyone understand what the goal is?

After you know the answers to those questions, you will know that, as amazing as he is, your dog will not ever be able to fix your dishwasher.

But now you know who can.

The overuse of no-store in Cache-Control Headers

Many of the sites that I work with have this habit of using a browser Cache-Control header without fully understanding what it means:

cache-control: max-age=0, no-cache, no-store, private

Everything in that header is moot once no-store is added: the browser honors the most restrictive directive in the list. So the effective caching policy defined by that group of directives is simply

cache-control: no-store

Now, the issue comes when the visitor refreshes the page. They do not get the opportunity to REVALIDATE the content, as the browser has been told not to store the content anywhere at all.

If the goal is to actually force a visitor to REVALIDATE the content on every page view, then use this instead:

cache-control: max-age=0, no-cache, private

While this set of directives would seemingly prevent any caching, its actual objective is to force the browser to treat the content it holds as stale and send a conditional request (If-Modified-Since, or If-None-Match with the relevant ETag) asking the server whether the stored copy is still valid.

Performing a REVALIDATE rather than a full load reduces the amount of data transferred between client and server, which can improve performance and reduce CDN costs, especially at scale.
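On the wire, the revalidation flow that no-cache (without no-store) enables looks like this — the path and ETag value are illustrative:

```
HTTP/1.1 200 OK
Cache-Control: max-age=0, no-cache, private
ETag: "abc123"

GET /page HTTP/1.1
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified
```

The 304 carries no body; the browser reuses the copy it already holds, which is exactly where the bandwidth saving comes from.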

My First Patagonia Catalog

[NOTE: This post is restored from the Wayback Machine. It was initially published December 22 2016 and lost during a database transfer sometime in the past. ]

I can’t remember the exact year, but I know it was in the late 1980s, when I got my first Patagonia Catalogue (I am Canadian after all). It opened my eyes to some amazing outdoor adventures, as well as introducing me to the history of the company – there was a long company history article among the pages.

The product I remember the most from the catalogue? The Ironworker Climbing Pants. The concept of these has stuck with me for nearly 30 years. Pants so tough that they could survive the abuse of an ironworker and a climber on Half Dome.

But I also remember the crazy sailing and fishing products that they had. It impressed me that the people who worked for Patagonia and designed the products weren’t just crazy stonewallers, but wanted to be a part of the outdoors, no matter where the outdoors were.

I have never owned anything from Patagonia. My kids wore Patagonia, when they were younger, as we had a fantastic Goodwill store when we lived in San Mateo and people were dropping off some amazing stuff during the crazy years of the boom.

As I have gotten older and more sedentary, I likely can’t fit into any of their products with my spreading middle-aged frame. I could buy some knock-off or one of the amazing brands that has appeared in the intervening years (I see The North Face everywhere right now – is this a hot brand or just better marketing?).

But this has not stopped my love of (and lust for) Patagonia products. Why would I desire something I could never get into or have any need for?

For the same reason I appreciate anything: the love Patagonia puts into their designs, the simplicity of their complexity, and the pride people have who wear their products not just as a fashion statement, but because they understand what Patagonia stands for.

Link Fixing and the Wayback Machine

This blog has been around for a long time, moved several times (both in hardware and physical locations), been ignored, and has become broken.

Since the start of the month (April 2022) I have been restoring the links and images on this blog from the Internet Archive’s Wayback Machine. Even if you didn’t want it out there anymore, the Wayback Machine will find it.

It will likely take time to restore the glory that once was the Newest Industry blog, and, yes, some posts will be removed, but it’s coming back.

Web Caching and Old Age

In 2002, I was invited to speak at an IBM conference in San Francisco. When it came time to give my presentation, no one showed up.

I had forgotten about it until I was perusing the Wayback Machine and found the PDF of my old presentation.

The interesting thing is that the discussion in this doc is still relevant, even though the web is a very different beast than it was in 2002. Caching headers and their deployment have not changed much since they were introduced.

And there are still entities out there who get them wrong.


If you like ancient web docs, check out what webperformance.org looked like in 2007. [Courtesy of the Wayback Machine]

Covid Daily Stats and the Question of China

One of my favorite places to get Covid stats from is the Our World In Data data explorer. They aggregate all the stats into a number of great visualizations that you can share with friends.

In this data are nuggets of information that get lost when you are surrounded by North American media. For example, did you know that France and Germany were the centers of a new European Covid wave in April 2022?

In this cascade of data are some interesting signs of how our world really uses and abuses information. The NY Times reported on some of the weird Covid data emerging from China (Shanghai’s Low Covid Death Toll Revives Questions About China’s Numbers). The Our World In Data charts show just how unusual this information is.

Covid Case Counts – China

Covid Daily Deaths – China

While I believe that the methods used to control Covid in China are aggressive, they cannot be this successful. Full stop. The case counts are far lower than anywhere else in the world and the confirmed deaths are, well, remarkably low.

Unbelievably low, upon sober thought.

The battle that democratic India is waging to control the release of statistical models of its actual mortality rate change during Covid (India Is Stalling the W.H.O.’s Efforts to Make Global Covid Death Toll Public) shows how a country with even tighter controls could bury its actual mortality rate far more effectively.

Throughout the Pandemic, there have been two battles: one to control the disease; the other to control the facts about the disease. In North America, the disinformation campaign has been incredibly strong; in China, it pales beside the no information campaign.

Taking this data, anyone can shape a narrative that reflects their world view. But what narrative can you shape about no data?

Hüsker Dü’s Newest Industry and The World Today

This blog used to have a different name, but a few years ago, I let the registration of the domain lapse and someone else snapped it up. It was based around a Hüsker Dü song from Zen Arcade.

I’ve been listening to that album a lot lately, and this song keeps standing out as a timeless reminder of what we will do to ourselves if things get out of hand.

Listen carefully; these lyrics from 1982/3 still have a deep meaning.

Old Hardware is Still Good Hardware

NOTE: This is a retelling of a post from 14 years ago.

I have a thing for re-using old hardware for server equipment. This is odd given the ease of deploying apps/sites/cat pictures on shiny cloud services, but I am old school and prefer to be able to put my hands on the devices that serve my stuff.

Currently, the 2008 First Generation Aluminum MacBook is running Ubuntu Server and delivering the content you are looking at. Previously, it had been hosted on one of the Raspberry Pi 3b+ machines you see in the background, but I figured it was time for an “upgrade”. I have an old Dell desktop machine under my desk that I may repurpose to run the upgraded version of Ubuntu LTS, but that is a project for the summer.


In the past, back when we lived in Massachusetts, I had a hodgepodge rack of devices: ancient Dell desktops, old server machines, and a bunch of hopes and dreams.

At least with the new setup, I don’t have to worry about there being an inch of water in the basement after a heavy rain.

There have been stories around for the last month that suddenly make “old” hardware shiny again – install ChromeOS Flex! Well, that’s not the only use that old machines have.

Servers can run on just about any platform. Even if it’s just a local DNS or MX server, it doesn’t need to go on the trash heap.

It’s not just you who can benefit from recycling or donating still-working computers and equipment. Not everyone has access to the best and the shiniest, but that may not be what they need. Giving a family an old laptop that can run ChromeOS Flex may make it easier to raise them above the digital poverty line. That old iPhone or Android device that you aren’t using could make it easier for a family to stay in touch.

If you aren’t using it anymore, donate or recycle it appropriately. Info on donating and recycling your old hardware is available here.

Old to you may be amazing to someone else.

Copyright © 2026 Performance Zen
