> The goal of the current maintenance is to fix a lot of long-standing issues with the site. The underlying infrastructure was getting very fragile as technical debt accumulated over time. A team is working very hard right now to make sure that once the site is back up, it's on much better footing and will be solid and reliable for the long term. Despite the unfortunate amount of time this is taking, it will be a major benefit to the site in the long run.
If I were a developer there I would be feeling really not very good. Just minutes of downtime on the systems I’ve worked on gets my heart rate going.
It also feels like there’s a lot being left unsaid in this statement. Normally you would work on these things in parallel to production… so something is seriously wrong.
The scenarios I have taken extended downtime for: when an OLTP database needed a serious overhaul for some reason and it was cheaper to plan operational downtime for the rollout than to risk losing data or inconsistent transactions; generational platform migrations up to complete system rewrites (something I am generally against, but that is its own soapbox); migrating from on-prem to cloud infra, which required design changes; and migrating from one DB technology to another (MySQL -> PostgreSQL). In all cases, data integrity/consistency is the critical aspect.
In all those cases there is serious planning done before the migration: checklists, trial runs/validations, and validation procedures for the day of. If something isn't working, the leadership group evaluates the issue and decides rollback vs. go-forward. Rollback also needs to be planned for, and factored into your planned downtime window.
I agree with you; this wording implies they are still making changes beyond the planned ones. This could've been bad planning, a bad call on the day of, etc.
In one scenario, we _had_ to go forward while resolving several blockers on the fly. We had planned developer rotation shifts ahead of time, pulling people off the line after 8-12 hrs; at some point, you aren't thinking clearly under stress. I don't know how big the team over there is, but I hope they are pacing themselves during what I am sure is a horrible moment of crisis for them.
My advice to them: consider a rollback if needed/possible. Split responsibility between whoever is managing the process and whoever is dealing with specific problems. Focus on MVP. Don't try to _fix_ and replace at the same time; if something was broken before, business-wise, log it in your bug tracker and deal with it later. Pull people away if needed to get rest. Keep upper management away from the people doing the work; have them talk only to the group handling process management.
Edit: I am also making a good-faith assumption that this is planned and not an emergency response; either way, it doesn't change my general advice.
My money's on "entire site was hosted on a single box, which just up and died, wiping out a decade's worth of monkeypatches."
Conversely, if this is indeed true motivation and management has accepted it, kudos to them. It sounds like the engineers said that the situation is untenable and this is the cover we need to fix it, and they got what they asked for.
I don't know, it just doesn't feel very scheduled to me.
> I'm about to loose thousands of dollars by the end of Monday 20th because of the automatic shipping deadline on Tindie and it currently being down. I've tried contacting support multiple times but they are not helping. Please respond before my business fails!
https://mastodon.social/@thereminhero/116432503640568650
Right? Retail stores close for a few days for renovations and nobody has a heart attack.
You cannot build a physical store in parallel to the current one and swap them in place once done. Here the issue is not that it's down for several days; it's that no reason is given for something so unusual.
Yeah but they HAVE to be finished on time because otherwise the supermarket manager will have a heart attack.
I don't have the links handy but I believe there are some comments from staff on social media that give more details.
Edit: https://hackaday.social/@tindie/116427447318102919
https://hackaday.social/@tindie/116436988752373293
The maker people I know have been migrating away from Tindie because it has felt like a sinking ship for a long time.
I really like the idea of Tindie, so I hope they can succeed. I don't understand what sequence of events led to this being such a large problem that they can't even keep their site online. The post says something vague about the engineering team hoping the migration work is close to finished, but I can't remember any engineering team knocking out an entire site for days, unable to restore it, during a failed migration. Are they outsourcing dev work to the type of agency that bills by the hour and perpetually churns out low-cost work to make their money in volume fixing their own code?
> The maker people I know have been migrating away from Tindie
To what? The only alternative I know of is Lectronz.
Shopify, etsy, crowdsupply, a custom website. All have their problems, i’m not endorsing. I sell on tindie. Well, i don’t sell much there, but i list on tindie. Most of my sales come thru my own store site.
That just brings back the original problem Tindie solved: discoverability.
It's like saying people are fleeing ebay for Shopify. Yeah, I guess -- but that only really solves the merchant sales problem.
I buy from indie elec shops directly when I can, but the problem is that I commonly discover those shops thru Tindie. Word of mouth/Discord/etc. isn't nearly as great a tool as a searchable, regularly refreshed index.
For myself at least, discoverability is a huge thing for tindie. I'll go there for something specific and pretty much every single time just poke around until I find something else too. It's kind of like shopping for clothes - I want a new shirt, but some fancy new pants can't hurt.
The EEVBlog folks have said good things about Elecrow, https://www.eevblog.com/forum/manufacture/tindie-down/msg624...
It can be as simple as a terraform apply wiping out huge swaths of the backend infra; getting that back, depending on how disciplined you are, can take on the order of days/weeks.
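For what it's worth, Terraform has a guardrail for exactly this failure mode: the `prevent_destroy` lifecycle meta-argument makes any plan that would destroy the resource fail outright. A minimal sketch (the resource name and attribute values here are hypothetical, not anything from Tindie's setup):

```hcl
# Hypothetical RDS instance holding the production database.
resource "aws_db_instance" "prod" {
  identifier        = "prod-db"       # example values only
  engine            = "postgres"
  instance_class    = "db.t3.medium"
  allocated_storage = 50

  lifecycle {
    # Any plan that would destroy this resource errors out,
    # so a bad refactor or stale state can't silently drop the DB.
    prevent_destroy = true
  }
}
```

Pair that with actually reading the `terraform plan` output for destroy lines before every apply; it won't save you from everything, but it can turn "wiped the backend" into an error message.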
Several sites are like this. Weather Underground has been in the process of migrating to a newer code base any day now since IBM bought them ten years ago, and now that they're private-equity owned I expect even more neglect than the two part-time interns IBM had maintaining it. Pity because it's a great site, it's just been running on life-support for a decade or more.
Another one is the TradeMe site in New Zealand, notable because NZ is one of the few countries where eBay isn't dominant. It's straight out of DailyWTF: they've been "doing some upgrades" forever, and every time you hit one of their countless bugs you get a note that it's due to the upgrades, which have been going on since the place got taken over by... whaddya know, another private-equity firm.
You have to wonder why it's so hard to put that on their 503 error page. I suspect something's much more broken than they're letting on.
This would indicate that wherever they were hosting their site no longer exists. 503s even on pages that should be mostly static suggest the backend is gone, or whatever ingress they're using in front of it disappeared. As far as I can tell, every single page on their site is 503'ing.
Example of a response I see:
< x-cache: Error from cloudfront
< via: 1.1 bdf85d6d4811ab08c57841855a848f8a.cloudfront.net (CloudFront)
< x-amz-cf-pop: LAX54-P11
< x-amz-cf-id: nTQ-y1Ut3F-04jUCDM09ordCtj0CMkVmmtZTe__BtzEr1sMJu7rKaw==
< age: 76773
Speculating wildly: they got pwned and are having to rebuild the site from barely-existent, out-of-date backups? As others have pointed out, a failed migration doesn't take you out for this long; you should be able to roll back, unless your migration plan was "reformat and reinstall".
They are putting out a lot of statements that make it easy, reading between the lines, to see what led to this, because I've been brought in to clean up messes like this before:
>The goal of the current maintenance is to fix a lot of long-standing issues with the site. The underlying infrastructure was getting very fragile as technical debt accumulated over time. A team is working very hard right now to make sure that once the site is back up, it's on much better footing and will be solid and reliable for the long term. Despite the unfortunate amount of time this is taking, it will be a major benefit to the site in the long run.
They are saying it was "spring cleaning" or a migration that took out the site for days. "Infrastructure getting very fragile" reeks of bad or nonexistent ops practices: probably very little or unreliable IaC, if any. I've seen shops get by for 10+ years by just clicking things in the console, until unfortunately it gets to this point.
This though, rubs me the wrong way:
> We want to offer a much better quality of service going foward. We understand that the lack of communication has been frustrating, and I have been closely watching social media and reporting the community's feelings up the chain, so your voices are being heard. The plan was not to have a long outage like this, but due to factors beyond the dev team's control, things have taken much longer than anticipated. Please be patient with us - I will keep updating here and on our other social media.
"Factors beyond the dev team's control." Sorry, no. If you have an ops team, you don't get to toss blame over the wall like that, and if you don't, you have no one to blame but yourselves. I feel bad for whoever the unofficial-official ops dude is right now. These kinds of infrastructure "tech debt" whoopsies come from years of people just not giving a crap about doing things properly; it's never seen as important until it suddenly is. Hope they learn a lesson and properly hire an infrastructure guy. There's long been a persistent delusion in the pure-dev world that they should be able to be completely agnostic to the hardware underneath their beautiful code. Ideally, yes; in practice, almost never, unless you come from a place with the significant resources to build something that nice, or are willing to pay out the azz for managed cloud services or licenses.
It is entirely possible, especially in small companies in my experience, that "factors beyond the dev team's control" means a technical founder with severe myopia and decision fatigue who vetoes "complexity" as they see it, which for them means everything you describe here as necessary.
100% agree, and I've seen this exact scenario play out.
I didn't take "the dev team" to exclude ops. Ops folks are usually devs, too.
Often, but there are a lot of shops that make them entirely separate, siloed teams, and the symptoms are usually what I'm describing here.
Most ops guys can do dev, the inverse is absolutely not true IME.
How big of an operation is Tindie? Founder plus one other dev/ops/everything else guy?
Unfortunate. Tindie is (was?) a pretty unique marketplace. Amusingly, a lot of what they were selling was probably illegal due to FCC rules: for the most part, you can't sell electronics without EMI certification and "I'm just a hobbyist" is not an excuse. Kits get a bit of leeway, but finished products don't.
Before the tariffs, I noticed that Chinese companies were trying to undercut them. I've gotten multiple mails asking me to start selling my designs with China-based outlets: they would make the PCBs, assemble them, and pay me some money for every item sold.
Can you share more information about the undercutting? I've heard of places like Elecrow trying to incentivize people to sell via their platform/OEM service but it sounds like you've had people asking you to license your designs?
I never followed up, but I didn't read it as some serious IP licensing thing. It sounded like they've come to the conclusion that they're making the stuff that's sold on Tindie anyway, so might as well set up a website and ship directly to your customers.
Free market is a good thing.
It's good until some unregulated electronic device creates interference that makes some poor guy's pacemaker act up and kills him.
As an RF expert, I can assure you that is not possible. And basic common sense should tell you why.
It's AM radio that gets interfered with.
It's not likely, but if you're an expert I'm sure you could think of a few ways it would be possible. The reason we give people with pacemakers a list of machines to avoid is definitely not to waste their time because there is no possible way any of those things could be dangerous to them.
I mean, more or less, we do. The NIH list includes cell phones, e-cigarettes, and headphones.
As an RF expert I can assure you that I could create a device to wirelessly interfere with a pacemaker. A pathological one, maybe, but the point remains: regulation is needed.
The question is whether such interference could be created by a device as a by-product of its normal operation, not by a weapon that's intended to cause harm.
Blind dogma is rarely a good thing. A free market is not a virtue or end goal in itself, but a means to other ends.
Every freedom has limits
Around Sunday/Monday last week, right before it went down, I noticed the site was super buggy and failing to add things to the cart. I emailed support and got a "we are checking the issue". Since it went down, all I've heard from support is "Please be patient. Tindie will be back up soon as we are currently performing maintenance. At this time, we do not have an estimated timeframe to provide."
The fact that it wasn't communicated beforehand at all, and that there's no timeframe, makes me think this was probably an ops screw-up.
I see this a lot with small independent sites with big user bases. Instead of being honest, they hide mistakes behind "maintenance" or blame hackers.
There are a number of things on Tindie that I have been unable to find anywhere else at any price. (Mostly small batch bespoke electronics.) I hope they figure this out.
As much as people want to be angry about this happening, the value of the thing to the maker community is too great. I hope they can figure this out.
I've bought some cool stuff off Tindie. My latest purchase was this set of earrings that alert when you're near a Flock camera
https://colonelpanic.tech/#products
This really tickled me, I wasn't expecting them to just be a pair of esp32 dev boards you attach to your ears
The site has been on life support for a decade, ownership has changed hands a few times, basic features promised 10 years ago never shipped, the API is half-implemented (e.g. you can download an order but you cannot mark it shipped), and they still have no mechanism to collect state sales tax, nor will they submit a 1099 as required by US tax law. I jumped ship 5 years ago when this became too much of a problem, and not a single thing has changed in those 5 years.
Tindie was a great place for a hacker to sell a few widgets back in the day, but legal requirements have changed since then and Tindie has not changed a line of code in at least 10 years.
If you didn’t inform people ahead of time, it’s probably not “scheduled”…
Scheduledn't maintenance.
Concerning; a professional development team should have been able to manage this switch with minimal to no downtime. Makes me wonder what other mistakes they're making. I'm reluctant to trust them with my payment information in the future.
Not everyone has seamless blue/green deployment.
However, any downtime over an hour or two screams "migration gone wrong" to me.
Otherwise wouldn't you just roll back to get the site up to come back at it and try again later?
In this day and age all it takes is one person who knows what they're doing.
That means they've got zero people who know what they're doing.
So many fairly popular apps, SaaS products, etc. are running at skeleton-crew staffing levels. It'll probably get worse with vibe coding. Though then again, they'll probably launch Claude Ops etc., now that I think about it.
Who is Tindie?
It's like Etsy for small-scale electronics - if you build a cool, niche electronic device as an individual, Tindie is a marketplace to sell in low volume (possibly as a kit).
Tinder for indie (hardware) devs and their customers. I.e. a webshop for indie devs who sell small series of niche hardware.
Scheduled maintenance in 2026 is insane
The biggest that comes to mind would be Steam.
Blizzard still brings World of Warcraft down every Tuesday for maintenance. It's down right now to apply a new content patch, which they estimated would take 8 hours.
https://us.support.blizzard.com/en/help/article/358479
I wonder if someone found an exploit of some sort and they are figuring out how to prevent it?
Either that or catastrophic data issues?
Otherwise so much downtime at once is pretty crazy
They must have really bungled something if they can't roll back and get the site operational again.
Yeah this sucks, I have a bunch of hobbyist orders stuck in limbo since last week -- customers have paid, but I can't pull the orders down even through the API.
I really like Tindie as a platform and have been using it since nearly the beginning...but I'd have lost the contract if I pulled this level of nonsense on a customer's production application.
:( I really like Tindie and what they're doing
Glad I used a privacy.com burner when I bought from them. Quite a while later I found a declined purchase for pizza on the now long-deactivated burner card I'd used to purchase through them.