Hackaday – Fresh hacks every day

Boss Byproducts: Fulgurites Are Fossilized Lightning
https://hackaday.com/2024/10/29/boss-byproducts-fulgurites-are-fossilized-lightning/
Tue, 29 Oct 2024 17:00:19 +0000

So far in this series, we’ve talked about man-made byproducts — Fordite, which is built-up layers of cured car enamel, and Trinitite, which was created during the first nuclear bomb test.

A lovely fulgurite pendant. Image via Etsy

But not all byproducts are man-made, and not all of them are untouchable. Some are created by Mother Nature, though the process that forms them is plenty dangerous. I'm talking about fulgurites, which can form whenever lightning discharges into the Earth.

Even if you've seen a fulgurite, you likely had no idea what it was. So what are they, exactly? Basically, they are natural tubes of glass, formed by the fusion of silica sand or rock during a lightning strike.

Much like Lichtenberg figures appear across wood, the resulting shape mimics the path of the lightning bolt as it discharged into the ground. And yes, people make jewelry out of fulgurites.

Lightning Striking Again

Lightning striking a tree. Poor tree.
Image via NOAA’s National Severe Storms Laboratory

Lightning is among the oldest observed phenomena on Earth. You probably know that lightning is just a giant spark of electricity in the atmosphere. It can occur between clouds, the air, or the ground and often hits tall things like skyscrapers and mountaintops.

Lightning is often visible during volcanic eruptions, intense forest fires, heavy snowstorms, surface nuclear detonations, and of course, thunderstorms.

In lightning’s infancy, air acts as an insulator between charges — the positive and negative charges between the cloud and the ground. Once the charges have sufficiently built up, the air’s insulating qualities break down and the electricity is rapidly discharged in the form of lightning.

When lightning strikes, the energy in the channel briefly heats the air to about 50,000 °F, several times the temperature of the Sun's surface. This makes the air explode outward. As the shock wave's pressure decreases, we hear thunder.
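That "several times" claim is easy to sanity-check with a quick unit conversion. A minimal sketch; the 50,000 °F figure is the one quoted above, and 5,772 K is the commonly cited temperature of the Sun's photosphere:

```python
def fahrenheit_to_kelvin(f: float) -> float:
    """Convert a temperature from degrees Fahrenheit to kelvin."""
    return (f - 32.0) * 5.0 / 9.0 + 273.15

lightning_k = fahrenheit_to_kelvin(50_000)  # lightning channel, ~28,000 K
sun_surface_k = 5_772.0                     # Sun's photosphere (standard value)

print(f"Lightning channel: {lightning_k:,.0f} K")
print(f"Ratio to the Sun's surface: {lightning_k / sun_surface_k:.1f}x")
```

That works out to roughly 28,000 K, or nearly five times the Sun's surface temperature, consistent with "several times".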

Of Sand and Rock and Other Stuff

Fulgurites, also known as fossilized lightning, don't have a fixed composition: they are made of whatever material happens to be present at the time of the lightning strike. Four main types of fulgurites are officially recognized: sand, soil, caliche (calcium-rich), and rock fulgurites. Sand fulgurites can usually be found on beaches or in deserts where clean sand, devoid of silt and clay, dominates. And like those Lichtenberg figures, sand fulgurites tend to look like branching tubes, with rough surfaces composed of partially-melted grains of sand.

Sand fulgurites, aka forbidden churros. Image via Wikimedia Commons

When sand fulgurites are formed, the sand rapidly cools and solidifies. Because of this, they tend to take on a glassy interior. As you might imagine, the size and shape of a fulgurite depends on several factors, including the strength of the strike and the depth of the sand being struck. On average, they are 2.5 to 5 cm in diameter, but have been found to exceed 20 cm.

Soil fulgurites can form in a wide variety of sediment compositions, including clay-, silt-, and gravel-rich soils as well as loesses, which are wind-blown formations of accumulated dust. These also appear as tubular or branching formations: vesicular, irregular, or a combination thereof.

Calcium-rich sediment fulgurites have thick walls and variable shapes, although it’s common for multiple narrow channels to appear. These can run the gamut of morphological and structural variation for objects that can be classified as fulgurites.

Rock fulgurites are typically found on mountain peaks, which act as natural lightning rods. They appear as coatings or crusts of glass formed on rocks, either found as branching channels on the surface, or as lining in pre-existing fractures in the rock. They are most often found at the summit or within several feet of it.

Fact-Finding Fulgurites

Aside from jewelry and such, fulgurites have scientific appeal wherever they're found: their presence can be used to estimate the number of lightning strikes in an area over time.

Then again there’s some stuff you may not necessarily want to use in jewelry making. Stuff that can be found in the dark, dank corners of the Earth. Stay tuned!

FreeBSD at 30: the History and Future of the Most Popular BSD-Based OS
https://hackaday.com/2024/10/28/freebsd-at-30-the-history-and-future-of-the-most-popular-bsd-based-os/
Mon, 28 Oct 2024 14:00:49 +0000

Probably not too many people around the world celebrated November 1st, 2023, but on this momentous date FreeBSD celebrated its 30th birthday. As the first original fork of the first complete and open source Unix operating system (386BSD), it continues the legacy that the Berkeley Software Distribution (BSD) began in 1978 and maintained until its final release in 1995. The related NetBSD project also forked from 386BSD and actually saw its first release a few months before FreeBSD's, but it has always pursued maximum portability. FreeBSD, by contrast – per the FAQ – specializes in a limited number of platforms, while providing the widest range of features on those platforms.

This means that FreeBSD is equally suitable for servers and workstations as for desktops and embedded applications, but each platform gets its own support tier, with the upcoming 15.x release providing first-tier support only for x86_64 and AArch64 (ARMv8). That said, if you happen to be a billion-dollar company like Sony, you are more than welcome to provide your own FreeBSD support. Sony's PlayStation 3, PlayStation 4, and PlayStation 5 game consoles all run FreeBSD, along with a range of popular networking and NAS platforms from other big names. Clearly, it's hard to argue with FreeBSD's popularity.

Despite this, you rarely hear people mention that they are running FreeBSD, unlike Linux. So one might wonder: is there anything keeping FreeBSD from stretching its digital legs on people's daily-driver desktop systems?

In The Beginning There Was UNIX

Once immortalized on the silver screen with the enthusiastically spoken words “It’s a UNIX system. I know this.”, the Unix operating system (trademarked as UNIX) originated at Bell Labs where it initially was only intended for internal use to make writing and running code for systems like the PDP-11 easier. Widespread external use started with Version 6, but even before that it was the starting point for what came to be known as the Unix-based OSes:

Diagram showing the key Unix and Unix-like operating systems (Credit: Eraserhead1, Infinity0, Sav_vas, Wikimedia)

After FreeBSD and NetBSD forked off the 386BSD codebase, both would spawn a few more forks, the most notable being OpenBSD, which Theo de Raadt forked off NetBSD when he was (controversially) removed from that project. DragonFly BSD in turn forked from FreeBSD, while FreeBSD itself is mostly used directly or through derivatives for specific applications, such as GhostBSD, which provides a pleasant, preconfigured desktop experience, and pfSense for firewall and router applications. Apple's Darwin, which underlies OS X and its successors, contains a significant amount of FreeBSD code as well.

Overall, FreeBSD is the most commonly used of these OSS BSDs and also the one you’re most likely to think of when considering using a BSD, other than OS X/MacOS, on a desktop system.

Why FreeBSD Isn’t Linux

Screenshot of Debian GNU/Hurd with Xfce desktop environment (Credit: VulcanSphere, Wikimedia)

The Linux kernel is described as ‘Unix-like’, as much like Minix it does not directly derive from any Unix or BSD but does provide some level of compatibility. A Unix OS meanwhile is the entirety of the tools and applications (‘userland’) that accompany it, something which is provided for Linux-based distributions most commonly from the GNU (‘GNU is Not Unix’) project, ergo these Linux distributions are referred to as GNU/Linux-based to denote their use of the Linux kernel and a GNU userland. There is also a version of Debian which uses GNU userland and the FreeBSD kernel, called Debian GNU/kFreeBSD, alongside a (also Unix-like) Hurd kernel-based flavor of Debian (Debian GNU/Hurd).

In terms of overall identity it’s thus much more appropriate to refer to ‘Linux kernel’ and ‘GNU userland’ features in the context of GNU/Linux, which contrasts with the BSD userland that one finds in the BSDs, including modern-day MacOS. It is this identity of kernel- and userland that most strongly distinguishes these various operating systems and individual distributions.

These differences result in a number of distinguishing features, such as the kernel-level FreeBSD jail feature that can virtualize a single system into multiple independent ones with very little overhead. This is significantly more secure than a filesystem-level chroot jail, which was what Unix originally came with. For other types of virtualization, FreeBSD offers bhyve, which can be contrasted with the Kernel-based Virtual Machine (KVM) in the Linux kernel. Both of these are hypervisors/virtual machine managers that can run a variety of guest OSes. As demonstrated in a comparison by Jim Salter between bhyve and KVM, there is a significant performance difference, with bhyve/NVMe on FreeBSD 13.1 outperforming KVM/VirtIO on Ubuntu 22.04 LTS by a large margin.

What this demonstrates is why FreeBSD for storage and server solutions is such a popular choice, and likely why Sony picked FreeBSD for its customized Playstation operating systems, as these gaming consoles rely heavily on virtualization, as with e.g. the PS5 hypervisor.

OpenZFS And NAS Things

A really popular application of FreeBSD is in Network-Attached Storage (NAS), with FreeNAS (now TrueNAS) originally ruling the roost here and iXsystems providing both development and commercial support. Here we saw some recent backlash, as iXsystems announced that they will be adding a GNU/Linux-based solution (TrueNAS SCALE), while the FreeBSD-based version (TrueNAS CORE) will remain stuck on FreeBSD version 13. The Register confirmed with iXsystems that this effectively ends TrueNAS on FreeBSD. Which wouldn't be so bad if performance on Linux weren't noticeably worse, as covered earlier, and if OpenZFS on Linux weren't so problematic.

SAS storage bays in Huawei RH2288H V2 Rack Server. (Source: Wikimedia)

Unlike with FreeBSD where the ZFS filesystem is an integral part of the kernel, ZFS on Linux is more of an afterthought, with a range of different implementations that each have their own issues, impacting performance and stability. This means that TrueNAS on Linux will be less stable, slower and also use more RAM. Fortunately, as befits an open source ecosystem, an alternative exists in the form of XigmaNAS which was forked from FreeNAS and follows current FreeBSD fairly closely.


So what is the big deal with ZFS? Originally developed by Sun for the Solaris OS, it was released under the open source CDDL license and is the default filesystem for FreeBSD. Unlike most other filesystems, it is both the filesystem and volume manager, which is why it natively handles features such as RAID, snapshots and replication. This also provides it with the ‘self-healing’ ability where some degree of data corruption is detected and corrected, without the need for dedicated RAID controllers or ECC RAM.
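The self-healing mechanism is worth spelling out: every block is checksummed, the checksum is stored separately from the data (in the parent block pointer), and a failed verification on read triggers a repair from a redundant copy. The following Python sketch illustrates just that checksum-and-repair pattern on a toy two-way mirror; it is a conceptual model, not actual ZFS code:

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class MirroredBlock:
    """Toy model of a checksummed block stored on a two-way mirror."""

    def __init__(self, data: bytes):
        self.expected = checksum(data)  # kept apart from the data itself
        self.copies = [bytearray(data), bytearray(data)]

    def read(self) -> bytes:
        """Return verified data, healing any corrupt copy along the way."""
        for copy in self.copies:
            if checksum(bytes(copy)) == self.expected:
                self._heal(bytes(copy))
                return bytes(copy)
        raise IOError("all copies corrupt: unrecoverable without more redundancy")

    def _heal(self, good: bytes) -> None:
        for i, copy in enumerate(self.copies):
            if checksum(bytes(copy)) != self.expected:
                self.copies[i] = bytearray(good)  # rewrite the bad copy

block = MirroredBlock(b"important data")
block.copies[0][0] ^= 0xFF  # simulate silent bit rot on one disk
print(block.read())         # detects the bad copy, heals it, returns good data
```

Note that no RAID controller or ECC RAM is involved: detection comes purely from the stored checksum, which is the point made above.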

For anyone who has had grief with any of the Ext*, ReiserFS, or other filesystems (journaled or not) on Linux, this probably sounds pretty good, and its tight integration into FreeBSD again explains why it's such a popular choice for situations where data integrity, performance, and stability are essential.

FreeBSD As A Desktop

It’s probably little surprise that FreeBSD-as-a-desktop is almost boringly similar to GNU/Linux-as-a-desktop, running the Xorg server and one’s desktop environment (DE) of choice. Which also means that it can be frustratingly broken, as I found out while trying to follow the instructions in the FreeBSD handbook for setting up Xfce. This worked about as well as my various attempts over the years to get to a working startx on Debian and Arch. Fortunately trying out another guide on the FreeBSD Foundation site quickly got me on the right path. This is where using GhostBSD (using the Mate DE by default) is a timesaver if you want to use a GUI with your FreeBSD but would like to skip the ‘deciphering startx error messages’ part.

After installation of FreeBSD (with Xfce) or GhostBSD, it's pretty much your typical desktop experience. You get effectively the same software as on a GNU/Linux distro, with FreeBSD even providing binary (user-space) compatibility with Linux, and official GPU driver support from e.g. NVIDIA (for x86_64). If you intend to stick to the desktop experience, it's probably quite unremarkable from here onwards, minus the use of the FreeBSD pkg (and source code ports) package manager instead of apt, pacman, etc.

Doing Some Software Porting

One of my standard ways to test an operating system is to try to get some of my personal open source projects running on it, particularly NymphCast, as it takes me pretty deep into the bowels of the OS and its package management system. Since NymphCast already runs on Linux, this should be a snap, one would think. As it turns out, this was mostly correct. From having had a play with this on FreeBSD a few years ago, I was already aware of a few gotchas, such as the difference between GNU make and BSD make, with the former being available as the gmake package and command.

Another thing you may want to do is set up sudo (also a package), as it is not installed by default. It then took me only a few seconds to nail down the names of the dependencies to install via the FreeBSD Ports site, which I added to the NymphCast dependencies shell script. After this I was almost home free, except for some details.

These details being that on GhostBSD you need to install the GhostBSD*-dev packages to do any development work, and after some consulting with the fine folks over at the #freebsd channel on Libera IRC, I concluded that using Clang (the system default) to compile everything instead of GCC would resolve the quaint linker errors, as the two compilers link against different C++ standard libraries (clang/libc++ vs gcc/libstdc++).

This did indeed resolve the last issues, and I had the latest nightly of NymphCast running on FreeBSD 14.1-RELEASE, playing back videos streamed from Windows and Android systems. Not that this was shocking, as the current stable version is already up on Ports, but that package's maintainer had made similar tweaks (gmake and use of clang++) as I did, so this should make their work easier next time.

FreeBSD Is Here To Stay

I'll be the first to admit that none of the BSDs were much of a blip on my radar for most of the time I spent with various OSes. Of course, I got lured into GNU/Linux by the vapid declarations of the 'Year of the Linux Desktop' back in the late 90s, but FreeBSD always seemed to be 'that thing for servers'. It might have been just my fascination with porting projects like NymphCast to other platforms that got me started with FreeBSD a few years ago, but the more you look into what it can do and how it differs from other OSes, the more you begin to appreciate what a whole, well-rounded package it is.

At one point in time I made the terrible mistake of reading the 'Linux From Scratch' guide, which just reinforced how harrowingly pieced-together Linux distributions are. Compared to the singular code bases of the BSDs, it's almost a miracle that Linux distributions work as well as they do. Another nice thing about FreeBSD is the project structure: no 'czar for life', but rather a democratically elected core leadership. The 30-year anniversary reflection article (PDF) in FreeBSD Journal describes how this system was created. One could say that this creates a merit-based system that rewards even newcomers to the project. As a possible disadvantage, however, it does not create nearly the same clickbait-worthy headlines as another Linus Torvalds rant.

With widespread industry usage of FreeBSD and a strong hobbyist/enthusiast core, it seems fair to say that FreeBSD's future looks brighter than ever. With FreeBSD available for easy installation on a range of SBCs and running well in a virtual machine, it's definitely worth giving it a try.

Will .IO Domain Names Survive A Geopolitical Rearrangement?
https://hackaday.com/2024/10/23/will-io-domain-names-survive-a-geopolitical-rearrangement/
Wed, 23 Oct 2024 14:00:34 +0000

The Domain Name System (DNS) is a major functional component of the modern Internet. We rely on it for just about everything! It’s responsible for translating human-friendly domain names into numerical IP addresses that get traffic where it needs to go. At the heart of the system are the top-level domains (TLDs)—these sit atop the whole domain name hierarchy.

You might think these TLDs are largely immutable—rock solid objects that seldom change. That’s mostly true, but the problem is that these TLDs are sometimes linked to real-world concepts that are changeable. Like the political status of various countries! Then, things get altogether more complex. The .io top level domain is the latest example of that.

A Brief History

ICANN is the organization in charge of TLDs.

Before we get into the current drama, we should explain some background on top-level domains. Basically, as the Internet started to grow out of its nascent form, there was a need to implement a proper structured naming system for online entities. In the mid-1980s, the Internet Assigned Numbers Authority (IANA) introduced a set of original top-level domains to categorize domain names. These were divided into two main types—generic top-level domains and country code top-level domains. The generic TLDs are the ones we all know and love—.com, .org, .net, .edu, .gov, and .mil. The country codes, though, were more complex.

Initially, the country codes were based around the ISO 3166-1 alpha-2 standard—two letter codes to represent all necessary countries. These were, by and large, straightforward—the United Kingdom got .uk, Germany got .de, the United States got .us, and Japan got .jp.
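The alpha-2 code to ccTLD mapping is mechanical: lowercase the two-letter code and prefix a dot. A tiny sketch using just the entries discussed here (the dictionary is an illustrative subset, as the real standard has around 250 assigned codes, and .uk rather than .gb is a famous historical exception to the pattern):

```python
# Illustrative subset of ISO 3166-1 alpha-2; the real table has ~250 entries.
ISO_3166_1_ALPHA2 = {
    "DE": "Germany",
    "US": "United States",
    "JP": "Japan",
    "IO": "British Indian Ocean Territory",
}

def cctld(alpha2: str) -> str:
    """Derive the ccTLD for an assigned ISO 3166-1 alpha-2 code."""
    code = alpha2.upper()
    if code not in ISO_3166_1_ALPHA2:
        raise KeyError(f"{alpha2!r} is not an assigned ISO 3166-1 alpha-2 code")
    return "." + code.lower()

print(cctld("IO"))  # .io -- valid only as long as 'IO' stays in the standard
```

This is also why retirement flows through ISO 3166-1: once a code is dropped from the table, the lookup above has nothing to derive a ccTLD from.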

Eventually, management of TLDs was passed from IANA to a new organization called ICANN—the Internet Corporation for Assigned Names and Numbers. Over time, ICANN has seen fit to add more TLDs to the official list. That's why today, you can register a domain under .biz, .info, or .name. Or .horse, .dad, .foo, or so many others besides.

Wikipedia maintains an interactive decoding table that covers the full ISO 3166-1 alpha-2 code space, as used to designate ccTLDs. Credit: Wikipedia


What’s With .io?

The official logo of the .io ccTLD. The Internet Computer Bureau Ltd. is the registry organization in charge of it. 

Over the past 20 years or so, the .io domain has become particularly popular with the tech set—the initialism recalls the idea of input/output. Thus, you have websites like Github.io or Hackaday.io using a country-code TLD for vanity purposes. It’s pretty popular in the tech world.

This was never supposed to be the case, however. The domain was originally designated for the British Indian Ocean Territory, all the way back in 1997. This is a small overseas territory of the United Kingdom, which occupies a collection of islands of the Chagos Archipelago. Total landmass of the territory is just 60 square kilometers. The largest island is Diego Garcia, which plays host to a military facility belonging to the UK and the United States. Prior to their removal by British authorities in 1968, the island played host to a population of locals known as Chagossians.

The flag of the British Indian Ocean Territory. Not even kidding.

The territory has been the subject of some controversy, often concerning the Chagossians and their wish to return to the land. More recently, the Mauritian government has made demands for the British government to relinquish the islands. The East African nation considers that the islands should have been handed back when Mauritius gained independence in 1968.

Recent negotiations have brought the matter to a head. On October 3, the British and Mauritius governments came to an agreement that the UK would cede sovereignty over the islands, and that they would hence become part of Mauritius. The British Indian Ocean Territory would functionally cease to exist, though the UK would maintain a 99-year lease over Diego Garcia and continue to maintain the military facility there.

The key problem? With the British Indian Ocean Territory no longer in existence, it would thus no longer be eligible for a country-code TLD. According to IANA, ccTLDs are based on the ISO 3166-1 standard. When a country ceases to exist, it is removed from the standard, and thus the ccTLD is supposed to be retired in turn. IANA's stated protocol is to notify the manager of the ccTLD and remove it after five years by default. Managers can ask for an extension, limited to another five years, for a total of ten years maximum. Alternatively, a ccTLD manager may allow the domain to be retired early at their own discretion.

However, as per The Register, the situation is more complex. The outlet spoke to ICANN, which is the organization actually in charge of declaring valid TLDs. A spokesperson provided the following comment:

ICANN relies on the ISO 3166-1 standard to make determinations on what is an eligible country-code top-level domain. Currently, the standard lists the British Indian Ocean Territory as ‘IO’. Assuming the standard changes to reflect this recent development, there are multiple potential outcomes depending on the nature of the change.

One such change may involve ensuring there is an operational nexus with Mauritius to meet certain policy requirements. Should ‘IO’ no longer be retained as a coding for this territory, it would trigger a 5-year retirement process described at [the IANA website], during which time registrants may need to migrate to a successor code or an alternate location.

We cannot comment on what the ISO 3166 Maintenance Agency may or may not do in response to this development. It is worth noting that the ISO 3166-1 standard is not just used for domain names, but many other applications. The need to modify or retain the ‘IO’ encoding may be informed by needs associated with those other purposes, such as for Customs, passports, and banking applications.

The Chagos Archipelago is, genuinely, a long way from everywhere. Credit: TUBS, CC BY-SA 3.0

Basically, ICANN passed the buck, putting the problem at the feet of the International Organization for Standardization, which maintains ISO 3166-1. If the ISO standard retains the IO designation for some reason, it appears that ICANN would probably follow suit. If ISO drops it, the ccTLD could be retired.

The Register notes that the .io record in ISO 3166-1 has not changed since a minor update in 2018. Any modification by ISO would be unlikely before the treaty between the UK and Mauritius is ratified in 2025. At that point, the five year clock could start ticking.

However, history is a great educator in this regard. There's another grand example of a country that functionally ceased to exist: in 1991, the Soviet Union was no longer a going concern. The 'SU' entry was removed from ISO 3166-1 in 1992 when the USSR broke up into its constituent states, which were all given their own country codes, except for Ukraine and Belarus, which had already entered ISO 3166 before that point. And yet, the .su designation remains "exceptionally reserved" in the ISO 3166-1 standard at the request of the Foundation for Internet Development.

.su domains are still very much a going concern, 33 years after the fall of the Soviet Union.

But can you still get a .su domain? Well, sure! Netim.com will happily register one for you. A number of websites still use the TLD, like this one, and it has reportedly become a popular TLD for cybercriminal activity. The current registry is the Russian Institute for Public Networks, and .su domains persist despite efforts by ICANN to end its use in 2007.

Given .io is so incredibly popular, it’s unlikely to disappear just because of some geopolitical changes. Even if it were to be designated for retirement, it would probably stick around for another five to ten years based on existing regulations. More likely, though, special effort will be made to officially reserve .io for continued use. Heck, even if ISO drops it, it could become a regular general TLD instead. If .pizza can be a domain, surely .io can be as well.

Long story short? There are questions around the future of .io, but nothing’s been decided yet. Expect vested interests to make sure it sticks around for the foreseeable future.


Tech In Plain Sight: Tasers Shooting Confetti
https://hackaday.com/2024/10/16/tech-in-plain-sight-tasers-shooting-confetti/
Wed, 16 Oct 2024 14:00:59 +0000

One of the standard tropes in science fiction is some kind of device that can render someone unconscious — you know, like a phaser set to stun. We can imagine times when being aggressively knocked out would lead to some grave consequences, but — we admit — it is probably better than getting shot. Still, we don't really have any reliable technology to do that today. If you've passed a modern-day policeman, though, you've probably noticed the Taser on their belt. While this sounds like a phaser, it really isn't anything like it. It is essentially a stun gun with a long reach, thanks to a wire with a dart on the end that shoots out of the gun-like device and shocks the target at a distance. Civilian Tasers have a 15-foot wire, while law enforcement can get longer ones. But did you know that modern Tasers also fire confetti?

A Taser cartridge and some AFIDs

It sounds crazy, and it isn't celebratory. The company that makes the Taser — formerly the Taser company, but now Axon — added the feature because of a common complaint law enforcement had with the device. Interestingly, many things that might be used in committing a crime are well-understood. Ballistics can often identify that a bullet did or did not come from a particular weapon, for example. Blood and DNA on a scene can provide important clues. Even typewriters and computer printers can be identified by variations in their printing. But if you fire a Taser, there's generally little evidence left behind.

Well, that was true until AFIDs (Anti-Felon Identification tags) came on the scene in 1993. The Taser uses a cartridge that has one or more spools of wire. When you fire the unit, you remove the cartridge and replace it with a new one. The cartridge also has some kind of propellant that fires the dart and wire. Early cartridges used gunpowder, although the newer ones appear to utilize gas capsules. The wire moves at between 180 and 205 feet per second. But modern cartridges also hold a few dozen very small disks that spew out under the force of the propellant. Each tag has a unique serial number tied to that cartridge.

Sure, if you have time, you could sweep up the 20 or 30 little tags. But they are less than a quarter of an inch around and disperse widely. Plus, you can’t be sure exactly how many tags are in any given cartridge, so you’d have to be very thorough. In fact, it is hard enough for investigators to find them when they want to. The tags are colorful and show up better when using special lights.

This isn’t just theoretical, by the way. Milwaukee police used AFIDs to track down a thief who stunned a musician and made off with a 300-year-old Stradivarius violin worth about $5 million. In another case, a man did extensive research about killing his boss to avoid being caught embezzling. He used a Taser to subdue his victim and knew to vacuum up the AFIDs, but didn’t get them all, allowing police to identify him as the killer.

Some printers and copiers leave digital fingerprints, too. On the other hand, some people seem to enjoy getting the occasional jolt of voltage.

Lagrange Points and Why You Want to Get Stuck At Them
https://hackaday.com/2024/10/09/lagrange-points-and-why-you-want-to-get-stuck-at-them/
Wed, 09 Oct 2024 14:00:03 +0000
Visualization of the Sun-Earth Lagrange points.

Orbital mechanics is a fun subject, as it involves a lot of seemingly empty space that's nevertheless full of very real forces, all of which must be taken into account lest one's spacecraft end up performing a sudden lithobraking maneuver into a planet or other significant collection of matter in said mostly empty space. The primary concern here is that of gravitational pull, and the way it affects one's trajectory and velocity. With a single planet providing said gravitational pull this is quite straightforward to determine, but add in another body (like the Moon) and things get trickier. Add another big planetary body (or a star like our Sun), and you suddenly have the restricted three-body problem, which has vexed mathematicians and others for centuries.

The three-body problem concerns the initial positions and velocities of three point masses. As they orbit each other and one tries to calculate their trajectories using Newton's laws of motion and law of universal gravitation (or their later equivalents), what emerges is a chaotic system without a closed-form solution. In the context of orbital mechanics involving the Earth, Moon, and Sun this is rather annoying, but in 1772 Joseph-Louis Lagrange found a family of solutions in which the three masses form an equilateral triangle at each instant. Together with earlier work by Leonhard Euler, this led to the discovery of what today are known as Lagrangian (or Lagrange) points.
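For a rough feel of where the collinear points end up, the distance from the smaller body to L1 or L2 can be estimated with the Hill-sphere approximation r ≈ R·(m/3M)^(1/3), valid when m is much smaller than M. A minimal sketch with rounded textbook constants (not figures from this article):

```python
# Back-of-envelope estimate of the Sun-Earth L1/L2 distance using the
# Hill-sphere approximation r ~ R * (m / (3 M))**(1/3).
# Constants are rounded textbook values, not mission-grade figures.

M_SUN = 1.989e30      # kg, mass of the Sun
M_EARTH = 5.972e24    # kg, mass of the Earth
AU_KM = 1.496e8       # km, mean Sun-Earth distance

def collinear_lagrange_distance(m: float, M: float, R: float) -> float:
    """Approximate distance from the smaller body to L1 or L2.

    To first order in (m/M)**(1/3), both points lie the same distance
    from the smaller body, on opposite sides.
    """
    return R * (m / (3.0 * M)) ** (1.0 / 3.0)

if __name__ == "__main__":
    r = collinear_lagrange_distance(M_EARTH, M_SUN, AU_KM)
    print(f"Sun-Earth L1/L2 sit roughly {r:.3e} km from Earth")
```

This lands at roughly 1.5 million km, which matches the commonly quoted distance of observatories like JWST from Earth.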

Having a few spots in an N-body configuration where you can be reasonably certain that your spacecraft won't suddenly bugger off in weird directions that necessitate position corrections using wasteful thruster activations is definitely a plus. This is why space-based observatories in particular, such as the James Webb Space Telescope, love to hang around in these spots.

Stable and Unstable

Although the definition of Lagrange points often makes it sound like you can put a spacecraft in that location and it'll remain there forever, it's essential to remember that 'stationary' only makes sense in a particular observer's reference frame. The Moon orbits the Earth, which orbits the Sun, which ultimately orbits the center of the Milky Way, which moves relative to other galaxies. Or perhaps it's just the expansion of space-time that makes it appear that the Milky Way moves, but that quickly gets one into the fun corners of theoretical physics.

A contour plot of the effective potential defined by gravitational and centripetal forces. (Credit: NASA)

Within the Earth-Sun system, there are five Lagrange points (L1 – L5). L2 is currently the home of the James Webb Space Telescope (JWST) and was the home of previous observatories (like NASA's WMAP spacecraft) that benefit from keeping the Sun, Earth, and Moon all in the same general direction, simplifying shielding. Similarly, L1 is ideal for any Sun observatory, as like L2 it is located within easy communication distance of Earth.

Perhaps surprisingly, the L3 point is not a very useful place to put observatories or other spacecraft, as the Sun would always block communication with Earth. What L3 has in common with L1 and L2 is that all three are unstable Lagrange points, requiring course and attitude adjustments approximately every 23 days. This contrasts with L4 and L5, the two 'stable' points. This can be seen in the above contour plot, where L4 and L5 sit on top of 'hills' and L1 through L3 sit on 'saddles' where the potential curves up in one direction and down in another.
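The hill-versus-saddle picture is easy to check numerically. Below is a minimal sketch (not from the article) using the standard effective potential of the circular restricted three-body problem in normalized units, with a rounded Earth-Moon mass fraction; it locates L1 by bisection along the Earth-Moon axis and confirms that L1 is a saddle while L4 sits higher:

```python
import math

MU = 0.01215  # Earth-Moon mass fraction m2/(m1+m2), rounded

def u_eff(x: float, y: float, mu: float = MU) -> float:
    """Effective potential (gravity + centrifugal) in the rotating frame.

    Normalized units: primaries at (-mu, 0) and (1-mu, 0), separation 1.
    """
    r1 = math.hypot(x + mu, y)        # distance to the large primary (Earth)
    r2 = math.hypot(x - 1 + mu, y)    # distance to the small secondary (Moon)
    return -(x * x + y * y) / 2 - (1 - mu) / r1 - mu / r2

def find_l1(mu: float = MU) -> float:
    """Locate L1 by bisecting d(u_eff)/dx along the y = 0 axis."""
    def slope(x: float) -> float:
        return (-x + (1 - mu) / (x + mu) ** 2
                + mu * (x - 1 + mu) / abs(x - 1 + mu) ** 3)
    lo, hi = 0.5, 1 - mu - 1e-6   # slope is positive at lo, negative near the Moon
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if slope(mid) > 0 else (lo, mid)
    return lo

l1 = find_l1()   # ~0.837: L1 sits roughly 15% of the Earth-Moon distance from the Moon
d = 0.01
# Saddle: stepping along the axis lowers the potential, stepping
# perpendicular to it raises the potential.
along = u_eff(l1 + d, 0) < u_eff(l1, 0) > u_eff(l1 - d, 0)
perp = u_eff(l1, d) > u_eff(l1, 0)
# L4 (apex of the equilateral triangle) sits higher than the L1 saddle.
l4_higher = u_eff(0.5 - MU, math.sqrt(3) / 2) > u_eff(l1, 0)
print(along, perp, l4_higher)
```

Note that a satellite doesn't simply roll off these hills: the Coriolis force in the rotating frame, which this static potential ignores, is what makes L4 and L5 dynamically stable in practice.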

One way to look at it is that satellites placed at the unstable points have a tendency to wander off, as they don't have as wide a region of relatively little variance (contour lines spaced far apart) as L4 and L5 do. While this makes the stable points look amazing, they are not as close to Earth as L1 and L2, and they come with a minor complication: they are already occupied, much like the Earth-Moon L4 and L5 points.

Because of how stable the L4 and L5 points are, the Earth-Moon ones have become home to the Kordylewski clouds. These are concentrations of dust that were first photographed by Polish astronomer Kazimierz Kordylewski in 1961 and have been confirmed multiple times since. Although a very faint phenomenon, there are numerous examples of objects caught at such points in, e.g., the Sun-Neptune system (Neptune trojans) and the Sun-Mars system (Mars trojans). Even our Earth has picked up a few over the years, many of them asteroids. Of note is that the Earth's Moon is not in either of these Lagrange points, having instead become gravitationally bound as a satellite.

All of which is a long way to say that it’s okay to put spacecraft in L4 and L5 points as long as you don’t mind fragile technology sharing the same region of space as some very large rocks, with an occasional new rocky friend getting drawn into the Lagrange point.

Stuff in Lagrange Points

A quick look at the Wikipedia list of objects at Lagrange points provides a long list of past and current natural and artificial objects at these locations, across a variety of systems. Sticking to just the things that we humans have built and sent into the Final Frontier, we can see that only the Sun-Earth and Earth-Moon systems have so far seen their Lagrange points collect more than space rocks and dust.

Starting with Sun-Earth, the L1 point has:

  • Solar and Heliospheric Observatory (SOHO, ESA)
  • Advanced Composition Explorer (ACE, NASA)
  • Global Geospace Science WIND (GGS, NASA)
  • Deep Space Climate Observatory (DSCOVR, NOAA)
  • Aditya-L1 (ISRO)

If things go well, these will be joined by IMAP and SWFO-L1 in 2025, and NEO Surveyor in 2027. These spacecraft mostly image the Sun, monitor the solar wind, and image the Earth and its weather patterns, tasks for which the L1 point is rather excellent. Of note here is that, strictly speaking, most of these do not simply linger at the L1 point, but rather follow a Lissajous orbit around it. This particular orbital trajectory was designed to compensate for the instability of the L1 – L3 points and minimize the need for course corrections.

Moving on, the Sun-Earth L2 point is also rather busy:

  • Gaia space observatory (ESA)
  • Spektr-RG astrophysics observatory (Russian-German)
  • James Webb Space Telescope (JWST, NASA, ESA, CSA)
  • Euclid space telescope (ESA)
  • Chang’e 6 orbiter (CNSA)

Many of the planned spacecraft that should be joining the L2 point are also observatories, with missions ranging from general observations across many parts of the spectrum to exoplanet and comet hunting.

Despite the distance and hazards of the Sun-Earth L4 and L5 points, these host the Solar TErrestrial RElations Observatory (STEREO) A and B solar observation spacecraft. The OSIRIS-REx and Hayabusa 2 spacecraft have passed through or near one of these points during their missions. The only spacecraft planned to be positioned at one of these points is ESA’s Vigil, which is scheduled to launch by 2031 and will be at L5.


Contour plot of the Earth-Moon Lagrange points. (Credit: NASA)

Only the Earth-Moon L2 point currently has a number of spacecraft crowding about, with NASA's THEMIS satellites going through their extended-mission observations alongside the Chinese relay satellite Queqiao-2, which supported the Chang'e 6 sample-retrieval mission.

In terms of upcoming spacecraft to join the sparse Moon Lagrange crowd, the Exploration Gateway Platform was a Boeing-proposed lunar space station, but it was discarded in favor of the Lunar Gateway, which will be placed in a polar near-rectilinear halo orbit (NRHO) with an orbital period of about 7 days. This means that the station will cover more of the Moon's orbit rather than remain stationary. It is intended to be launched in 2027 as part of NASA's Artemis program.

Orbital Mechanics Fun

The best part of orbits is that you have so many to pick from, allowing you not only to pick the ideal spot to idle at if that's the mission profile, but also to transition between them, such as when traveling from the Earth to the Moon with, e.g., a trans-lunar injection (TLI) maneuver. This involves a low Earth orbit (LEO) that transitions into a powered, highly eccentric orbit which approaches the Moon's gravitational sphere of influence.

Within this maneuver and its low-energy transfer alternatives, the restricted three-body problem continuously applies, meaning that the calculations for such a transfer have to account for as many variables as possible, in the knowledge that there is no perfect solution. With our current level of knowledge we can only bask in the predictable peace and quiet of the Lagrange points, if moving away from all those nasty gravity wells like the Voyager spacecraft did is not an option.

Recycling Tough Plastics Into Precursors With Some Smart Catalyst Chemistry
https://hackaday.com/2024/10/08/recycling-tough-plastics-into-precursors-with-some-smart-catalyst-chemistry/
Tue, 08 Oct 2024 14:00:12 +0000

Plastics are unfortunately so cheap and useful that they've ended up everywhere. They're filling our landfills, polluting our rivers, and even infiltrating our food chain as microplastics. As much as we think of plastic as recyclable, too, that's often not the case—while some plastics like PET (polyethylene terephthalate) are easily reused, others just aren't.

Indeed, the world currently produces an immense amount of polyethylene and polypropylene waste. These materials are used for everything from plastic bags to milk jugs to microwavable containers—and they're all really hard to recycle. However, a team at UC Berkeley might have just figured out how to deal with this problem.

Catalytic

Here’s the thing—polyethylene and polypropylene are not readily biodegradable at present. That means that waste tends to pile up. They’re actually quite tough to deal with in a chemical sense, too. It’s because these polymers have strong carbon-carbon bonds that are simply quite difficult to break. That means it’s very hard to turn them back into their component molecules for reforming. In an ideal world, you can sometimes capture a very clean waste stream of a single type of these plastics and melt and reform them, but generally, the quality of material you get out of this practice is poor. It’s why so many waste plastics get munched up and turned into unglamorous things like benches and rubbish bins.

At Berkeley, researchers were hoping to achieve a better result, turning these plastics back into precursor chemicals that could then be used to make fresh new material. The subject of a recent paper in Science is a new catalytic process that essentially vaporizes these common plastics, breaking them down into their hydrocarbon building blocks. Basically, they're turning old plastic back into the raw materials needed to make new plastic. This has the potential to be more than a nifty lab trick—the hope is that it could make it easy to deal with a whole host of difficult-to-recycle waste products.

Combining the plastics in a high-pressure reactor with ethylene gas and the catalyst materials breaks the polymer chains up into component molecules that can be used to make new plastics. Credit: UC Berkeley

The team employed a pair of solid catalysts, which help push along the desired chemical reactions without being consumed in the process. The first catalyst, sodium on alumina, tackles the tough job of breaking the strong carbon-carbon bonds in the plastic polymers. These materials consist of long chains of molecules, and this catalyst effectively chops them up, typically leaving a broken link on one of the polymer chain fragments in the form of a reactive carbon-carbon double bond. The second catalyst, tungsten oxide on silica, helps that reactive carbon atom pair up with ethylene gas streamed through the reaction chamber, producing propylene molecules as a result. As that carbon atom is stripped away, the process routinely leaves behind another double bond on the broken chain, ready to react again, until the whole polymer chain has been converted. Depending on the feed plastic—whether it's polyethylene, polypropylene, or a mixture—the same reaction process will generate propylene and isobutylene. These gases can then be separated out and used as the starting points for making new plastics.

Before and after—the plastic has been converted to gas, leaving the catalytic material behind. Credit: UC Berkeley

What’s particularly impressive is that this method works on both polyethylene and polypropylene—the two heavy hitters in plastic waste—and even on mixtures of the two. Traditional recycling processes struggle with mixed plastics, often requiring tedious and costly sorting. By efficiently handling blends, this new approach sidesteps one of the major hurdles in plastic recycling.

Achieving this conversion in practice is relatively simple. Chunks of waste plastic are sealed in a high-pressure reaction vessel with the catalyst materials and a feed of ethylene gas, with the combination then heated and stirred. The materials react, and the gas left behind contains the useful precursor molecules for making fresh plastic.

In lab tests, the catalysts converted a near-equal mix of polyethylene and polypropylene into useful gases with an efficiency of almost 90%. That’s a significant leap forward compared to current methods, which often result in lower-value products or require pure streams of a single type of plastic. The process also showed resilience against common impurities and should be able to work with post-consumer materials—i.e. stuff people have thrown away. Additives and small amounts of other plastics didn’t significantly hamper the efficiency, though larger amounts of PET and PVC did pose a problem. However, since recycling facilities already separate out different types of plastics, this isn’t a deal-breaker.

The process can even run efficiently with a mixture of polypropylene and polyethylene. Note that propene is just another word for propylene. Credit: UC Berkeley

One of the most promising aspects of this development is the practicality of scaling it up. The catalysts used are cheaper and more robust than those in previous methods, which relied on expensive, sensitive metals dissolved in liquids. Solid catalysts are more amenable to industrial processes, particularly continuous flow systems that can handle large volumes of material.

Of course, moving from the lab bench to a full-scale industrial process will require further research and investment. The team needs to demonstrate that the process is economically viable and environmentally friendly on a large scale. But the potential benefits are enormous. It could actually make it worthwhile to recycle a whole lot more single-use plastic items, and reduce our need to replace or eliminate them entirely. Anything that cuts plastic waste streams into the environment is a boon, too. Ultimately, there’s still a ways to go, but it’s promising that solutions for these difficult-to-recycle plastics are finally coming on stream.

Polaris Dawn, and the Prudence of a Short Spacewalk
https://hackaday.com/2024/10/03/polaris-dawn-and-the-prudence-of-a-short-spacewalk/
Thu, 03 Oct 2024 14:00:39 +0000

For months before liftoff, the popular press had been hyping up the fact that the Polaris Dawn mission would include the first-ever private spacewalk. Not only would this be the first time anyone who wasn’t a professional astronaut would be opening the hatch of their spacecraft and venturing outside, but it would also be the first real-world test of SpaceX’s own extravehicular activity (EVA) suits. Whether you considered it a billionaire’s publicity stunt or an important step forward for commercial spaceflight, one thing was undeniable: when that hatch opened, it was going to be a moment for the history books.

But if you happened to have been watching the live stream of the big event earlier this month, you’d be forgiven for finding the whole thing a bit…abrupt. After years of training and hundreds of millions of dollars spent, crew members Jared Isaacman and Sarah Gillis both spent less than eight minutes outside of the Dragon capsule. Even then, you could argue that calling it a spacewalk would be a bit of a stretch.

Neither crew member ever fully exited the spacecraft, they simply stuck their upper bodies out into space while keeping their legs within the hatch at all times. When it was all said and done, the Dragon’s hatch was locked up tight less than half an hour after it was opened.

Likely, many armchair astronauts watching at home found the whole thing rather anticlimactic. But those who know a bit about the history of human spaceflight probably found themselves unable to move off of the edge of their seat until that hatch locked into place and all crew members were back in their seats.

Flying into space is already one of the most mindbogglingly dangerous activities a human could engage in, but opening the hatch and floating out into the infinite black once you're up there is riskier still. Thankfully the Polaris Dawn EVA appeared to go off without a hitch, but not everyone has been so lucky on their first trip outside the capsule.

A High Pressure Situation

The first-ever EVA took place during the Voskhod 2 mission in March of 1965. Through the use of an ingenious inflatable airlock module, cosmonaut Alexei Leonov was able to exit the Voskhod 3KD spacecraft and float freely in space at the end of a 5.35 m (17.6 ft) tether. He attached a camera to the outside of the airlock, providing a visual record of yet another space “first” achieved by the Soviet Union.

This very first EVA had two mission objectives, one of which Leonov had accomplished when he successfully rigged the external camera. The last thing he had to do was turn around and take pictures of the Voskhod spacecraft flying over the Earth — a powerful propaganda image that the USSR was eager to get their hands on. But when he tried to activate his suit’s camera using the trigger mounted to his thigh, he found he couldn’t reach it. It was then that he realized the suit had begun to balloon around him, and that moving his arms and legs was taking greater and greater effort due to the suit’s material stiffening.

After about ten minutes in space Leonov attempted to re-enter the airlock, but to his horror found that the suit had expanded to the point that it would no longer fit through the opening. As he struggled to cram himself into the airlock, his body temperature started to climb. Soon he was sweating profusely, the sweat pooling around his body within the confines of the suit.

Unable to cope with the higher than anticipated internal temperature, the suit’s primitive life support system started to fail, making matters even worse. The runaway conditions in the suit caused his helmet’s visor to fog up, which he had no way to clear as he was now deep into a failure mode that the Soviet engineers had simply not anticipated. Not that they hadn’t provided him with a solution of sorts. Decades later, Leonov would reveal that there was a suicide pill in the helmet that he could have opted to use if need be.

With his core temperature now elevated by several degrees, Leonov was on the verge of heat stroke. His last option was to open a vent in his suit, which would hopefully cause it to deflate enough for him to fit inside the airlock. He noted that the suit was currently at 0.4 atmosphere, and started reducing the pressure. The safety minimum was 0.27 atm, but even at that pressure, he couldn’t fit. It wasn’t until the pressure fell to 0.25 atm that he was able to flex the suit enough to get his body back into the airlock, and from there back into the confines of the spacecraft.

In total, Alexei Leonov spent 12 minutes and 9 seconds in space. But it must have felt like an eternity.

Gemini’s Tricky Hatch

In classic Soviet style, nobody would know about the trouble Leonov ran into during his spacewalk for years. So when American astronaut Ed White was preparing to step out of the Gemini 4 capsule three months later in June of 1965, he believed he really had his work cut out for him. Not only had the Soviets pulled off a perfect EVA, but as far as anyone knew, they had made it look easy.

So it's not hard to imagine how White must have felt when he pulled the lever to open the hatch on the Gemini spacecraft, only to find it refused to budge. As it so happens, this wasn't the first time the hatch had misbehaved. During vacuum chamber testing back on the ground, it had refused to lock because a spring-loaded gear in the mechanism failed to engage properly. Luckily the second astronaut aboard the Gemini capsule, James McDivitt, had been present when the issue occurred on the ground and knew how the latch mechanism functioned.

Ed White

McDivitt felt confident that he could get the gear to engage and allow White to open the hatch, but was concerned about getting it closed. Failing to open the hatch and calling off the EVA was one thing, but not being able to secure the hatch afterwards meant certain death for the two men. Knowing that Mission Control would almost certainly have told them to abort the EVA if they were informed about the hatch situation, the astronauts decided to go ahead with the attempt.

As he predicted, McDivitt was able to fiddle with the latching mechanism and got the hatch open for White. Although there were some communication issues during the spacewalk due to problems with the voice-operated microphones, the EVA went very well, with White demonstrating a hand-held maneuvering thruster that allowed him to fly around the spacecraft at the end of his tether.

White was having such a good time that he kept making excuses to extend the spacewalk. Finally, after approximately 23 minutes, he begrudgingly returned to the Gemini capsule — informing Mission Control that it was “the saddest moment of my life.”

The hatch had remained open during the EVA, but now that White was strapped back into the capsule, it was time to close it back up. Unfortunately, just as McDivitt feared, the latches wouldn’t engage. To make matters worse, it took White so long to get back into the spacecraft that they were now shadowed by the Earth and working in the dark. Reaching blindly inside the mechanism, White was once again able to coax it into engaging, and the hatch was securely closed.

But there was still a problem. The mission plan called for the astronauts to open the hatch so they could discard unnecessary equipment before attempting to reenter the Earth’s atmosphere. As neither man was willing to risk opening the hatch again, they instead elected to stow everything aboard the capsule for the remainder of the flight.

Overworked, and Underprepared

At this point the Soviet Union and the United States had successfully conducted EVAs, but both had come dangerously close to disaster. Unfortunately, between the secretive nature of the Soviets and the reluctance of the Gemini 4 crew to communicate their issues to Mission Control, NASA administration started to underestimate the difficulties involved.

NASA didn't even schedule EVAs for the next three Gemini missions, and the ambitious spacewalk planned for Gemini 8 never happened because the mission was cut short by technical issues with the spacecraft. It wouldn't be until Gemini 9A that another human stepped out of their spacecraft.

The plan was for astronaut Gene Cernan to spend an incredible two hours outside of the capsule, during which time he would make his way to the rear of the spacecraft where a prototype Astronaut Maneuvering Unit (AMU) was stored. Once there, Cernan was to disconnect himself from the Gemini tether and don the AMU, which was essentially a small self-contained spacecraft in its own right.

Photo of the Gemini spacecraft taken by Gene Cernan

But as soon as he left the capsule, Cernan reported that his suit had started to swell and that movement was becoming difficult. To make matters worse, there were insufficient handholds installed on the outside of the Gemini spacecraft, making it difficult for him to navigate his way along its exterior. By the time he had eventually reached the AMU and struggled desperately to put it on, Mission Control noted his heart rate had climbed to 180 beats per minute. The flight surgeon was worried he would pass out, so Mission Control asked him to take a break while they debated whether he should continue with the AMU demonstration.

At this point Cernan noted that his helmet’s visor had begun to fog up, and just as Alexei Leonov had discovered during his own EVA, the suit had no system to clear it up. The only way he was able to see was by stretching forward and clearing off a small section of the glass by rubbing his nose against it. Realizing the futility of continuing, Commander Thomas Stafford decided not to wait on Mission Control and ordered Cernan to abort the EVA and get back into the spacecraft.

Cernan slowly made his way back to the Gemini’s hatch. The cooling system in his suit had by now been completely overwhelmed, which caused the visor to fog up completely. Effectively blind, Cernan finally arrived at the spacecraft’s hatch, but was too exhausted to continue. Stafford held onto Cernan’s legs while he rested and finally regained the strength to lower himself into the capsule and close the hatch.

When they returned to Earth the next day, a medical examination revealed Cernan had lost 13 pounds (5.9 kg) during his ordeal. The close call during his spacewalk led NASA to completely reassess its EVA training and procedures, and the decision was made to limit the workload on all future Gemini spacewalks, as the current air-cooled suit clearly wasn't suitable for long-duration use. It wasn't until the Apollo program introduced a liquid-cooled suit that American astronauts would spend any significant time working outside their spacecraft.

The Next Giant Leap

Thanks to the magic of live streaming video, we know that the Polaris Dawn crew was able to complete their brief EVA without incident: no shadowy government cover-ups, cowboy heroics, or near death experiences involved.

With the benefit of improved materials and technology, not to mention the knowledge gained over the hundreds of spacewalks that have been completed since the early days of the Space Race, the first private spacewalk looked almost mundane in comparison to what had come before it.

But there’s still much work to be done. SpaceX needs to perform further tests of their new EVA suit, and will likely want to demonstrate that crew members can actually get work done while outside of the Dragon. So it’s safe to assume that when the next Polaris Dawn mission flies, its crew will do a bit more than just stick their heads out the hatch.
