The review is poorly written and doesn't do a good job of actually comparing units. It is the kind of review article that is mostly fluff, with very few details about the actual differences revealed during testing, and the kind I have learned to ignore when looking for information about what to buy.
A few thoughts from me on the discussion so far, which I find incredibly insightful. Many thanks to everyone sharing their perspectives. I truly appreciate it.
On subjective reviews:
I think there's absolutely nothing wrong with reviews based primarily on an author's subjective opinion. However, such reviews should be appropriately labeled. For example, "My Favorite Air Quality Monitors" rather than "The Best Indoor Air Quality Monitors". The title sets reader expectations for objective evaluation with consistent methodology.
On the defective display:
Important clarification: we did not ship a broken device. The display issue developed during the review period—this wasn't a QC failure on our part. Hardware can fail during use (as it can with any electronic device), which is exactly why we immediately offered replacement parts, a new unit, and detailed repair instructions when we learned about it.
On the tiny display and lessons learned:
We're well aware that opinions on our display vary significantly, as evidenced by this discussion. Some users love it, others find it too small. We actually have differing opinions within the AirGradient team as well.
We're planning a refresh of our indoor monitor next year and are currently testing around 10 different display types—including e-ink, colour OLED, touchscreens, and others. So far, we haven't found the ideal replacement, but we're planning to involve our community later this year to gather feedback on the various options.
> For example, "My Favorite Air Quality Monitors" rather than "The Best Indoor Air Quality Monitors". The title sets reader expectations for objective evaluation with consistent methodology.
Unfortunately, that ship has sailed. There have now been so many review articles for so very long titled "Best X" when the nature of the review is "... in the subjective opinion of the review author" that it is unlikely anyone views a "best X" article as having any objective evaluation or rigor behind it at all.
Your suggestion would be nice to enforce, but there's no way to get that ship back to port to change its course now.
Super smart move. I hadn't heard of you folks before, but I'm interested in your product - open source and repairability are high on my list for home monitors. I'm lying in bed awake right now due to an air quality issue, so it's top of mind.
The only thing you're missing for me is radon detection. I just bought a house and tests came in below remediation levels, but the report showed a lot of spikes and variance. Do you have any plans for a model with radon detection in the future?
I think your concerns are legit, but it's not necessarily the reviewer's fault that they're on three deadlines and don't have the time to give your product the care and concern it deserves. It's probably the editor's or the publisher's.
I'm a world class writer but I stopped doing it for a living a long time ago. Why? Because as media moved from print to online, the work was devalued. I've worked for 25 cents a word sometimes, which was pretty decent when one 1200 word piece could pay rent back then. Nowadays, writers are offered $25 per article flat with no compensation for rewrites. Staff positions pay badly for too much work but are as coveted as C suite gigs are in the tech world. Maybe more so.
So if the reviewer is staff, they might be assigned three or four reviews in a given week on top of other work. If they're freelance, they might have to take on more just to make their rent. This is because your average magazine staffer who's not management pulls about as much as a Starbucks manager, and was ever thus, unless you got in at Vanity Fair or The Atlantic back in the Before Times.
It's like when I was reviewing albums for $50 a pop: I'd get a stack of them to review and cue up track one and if I didn't get hooked pretty quick, I'd just pop in the next one.
Your device arrived damaged, which is absolutely no one's fault, but your reviewer doesn't have time or honestly impetus to give it a second chance. Not for whatever they're getting paid for that review, which is not much at all.
It's just bad luck, is all. And yes, it's not fair and, yes, you're right to complain, but it's not as simple as "tech writer lazy".
(And if anyone's response is "They accepted the job, they should do their best at it no matter how little it pays", I'm guessing you've never had to duck your landlord to try not to get evicted before the freelance check you've been hunting up for three weeks arrives. There's a reason I'd rather make a living as a mediocre coder than a very good writer these days - at its worst, the tech industry is more remunerative and stable than the publishing industry is.)
> I think your concerns are legit, but it's not necessarily the reviewer's fault that they're on three deadlines and don't have the time to give your product the care and concern it deserves. It's probably the editor's or the publisher's.
You wrote it, you don't get to dodge the responsibility like that. Professional integrity still matters.
> It's like when I was reviewing albums for $50 a pop: I'd get a stack of them to review and cue up track one and if I didn't get hooked pretty quick, I'd just pop in the next one.
That seems unethical.
> There's a reason I'd rather make a living as a mediocre coder than a very good writer these days
As coders, we also have an ethical responsibility to push back against code that will harm people.
We need more writers to say no to writing stories with insufficient time/resources to do it ethically same as we need more developers who push back against building unethical products.
"is that this review is ... pretty much purely based on the personal preferences of the author."
You've found the core takeaway about nearly all "product reviews" in nearly all publications. They are almost all simply "the personal preferences of the author".
These authors have neither the time, nor the science skills, for anything even beginning to look like a rigorous scientific review, and so the "best" vs. "ok" vs. "not recommended" tags get applied because the author liked the particular shade of pink used on a trim piece on one, or liked that another one looks like the Apple computer they are using, and so forth.
But they are never based upon any objective criteria, and are never (nor ever were intended to be) reproducible in any scientific fashion.
Yet, as you say, they have "great power" to influence buying decisions on the part of folks who read their reviews.
> But they are never based upon any objective criteria, and are never (nor ever were intended to be) reproducible in any scientific fashion.
This is also why review aggregators exist: if I'm just getting into a thing, such as watching movies or buying appliances, I probably need a general sense of how people collectively feel about a thing. But if I'm keenly aware of my preferences, it helps me to find reviewers who align with how I think. People routinely seek out specific reviewers or specific publications for this reason.
For instance, someone reading this review might conclude "I really appreciate that ease of use is a topic that's front of mind with this reviewer." Another reviewer's focus might be customizability, and they might recommend AirGradient. And that reviewer's audience follows that person likely because those concerns are front of mind.
...to be honest, if AirGradient had responded more along those lines ("we prioritized X and Y. We understand if this isn't the best fit for customers looking for Z, but we're working on it"), it would've felt more empathetic and less combative to me.
I for one would reach for Consumer Reports well before Wired for recommendations. I might still read the review at Wired for the details and decide for myself what’s important rather than their recommendation.
First, the thing I'm not really seeing mentioned anywhere here in the HN comments is that a separate AirGradient sensor was #3 on the list of "recommended" sensors, and was specifically called "Best Budget Quality Air Monitor". I also can't seem to find this mentioned in the piece that you wrote, either. Why not highlight that success?
You write:
>How can a product be penalized for a failing display when another recommended product has no display?
This is an incredibly perplexing take. A display is subjective - whether or not the customer wants one is up to the customer. What the customer does want is a functional product, so regardless of what another product's features are, if that product functions as intended and yours does not, of course it's going to be recommended over yours.
>How can an indoor monitor without CO2 sensing - essential for understanding indoor air quality - be recommended over one that includes this crucial measurement?
Again - the products without CO2 sensors functioned as intended. It is indeed accurate that CO2 is one of the most critical metrics for assessing indoor air quality, but it goes back to my previous comment - perhaps the customer is more interested in PM2.5 indoors than CO2 for a specific reason. We don't know. Ultimately, the CO2-less sensors functioned as intended, whereas yours did not.
You go on to say:
>And specifically for situations like this: How would you want us to handle it? Should companies stay quiet when review methodology breaks down? Should we be more aggressive in calling this out? Or is transparency and open discussion the right approach?
Maybe focus less on one review and more on improving the product? As another comment states, you shipped a broken product and it suggests that there's a problem with your QA process. Further, you state early on:
>Let me be clear: this was a legitimate hardware failure, and we take full responsibility for it. As soon as we learned about the issue, we immediately sent replacement parts and a new unit, including repair instructions, as repairability is one of our core differentiators.
Let's maybe hear more about that. How/why did the hardware fail? Did you examine your QA process and make any improvements to it? Highlight these steps, as well as the "core differentiator" that is your repairability, rather than asking perplexing questions about why one reviewer didn't like your product.
As an "average Joe" customer in this area, the whole response feels excessive and... whiny (driven by the fact that you don't highlight that you did, in fact, have a product on the list that was well recommended). I don't say that to be terribly mean, it's just a bit off-putting. You're not necessarily wrong about product reviews like this in general, but like... who cares? Put the effort into making a solid product, not taking umbrage with one person's opinion.
There will be more reviews, and some of them will be negative. You're not going to be able to control perception and opinion, and nobody will ever get perfect marks from everyone. Learn to be OK with that.
Edit: I just saw your response about this not being a hardware failure when shipped. Still, the general concept of my point remains - detail what you're doing to determine how this happened and prevent it in the future, rather than griping about the review process. If "transparency is how [you] operate", lemme hear the deets about this issue!
Yes, I did not mention our outdoor monitor being featured because I don't see this as a success when I critique the whole methodology of the review.
As I mentioned above, we are working on a refresh of the indoor monitor. The display is also under discussion, but so far, with tens of thousands of indoor monitors sold, I am only aware of a single-digit number of cases of a failed display.
I have your product. I like your product. I like your company. Fundamentally, I can't disagree with the review. You got unlucky, and the reviewer was looking for a different product. It happens.
It's not a universal product. Competitors have upsides and downsides, and different people want different things.
I think this is the time to move on, and focus on the people who like and appreciate you, and not dwell on those who don't. Success brings more of both, and if you can't handle a few haters, you probably don't want to be too successful.
And reviews are imperfect, but a lot better than no reviews. Accept life isn't perfect.
Thanks for a great product and for running a company with integrity.
There seems to be a trend of companies attacking reviewers and claiming their feelings got hurt when they dislike a review: the Nothing CEO reacting to reviews on YouTube, Rabbit R1, the Humane AI Pin, Fisker cars, and now AirGradient, I guess.
To anyone else reading: I would recommend AirGradient; it is the only real contender that checks all the boxes for an air monitor. Love the way they spun this narrative.
Wired sold out to Condé Nast long ago. They're the tired ones.
This sounds like something Louis Rossmann should cover as a counter-example of mfgrs trying to do the right thing but fickle, corporate reviewers behaving in a petty, unfair manner.
The reviewer bought the product and received a broken unit. Why is it unfair to write about their actual experience with the product? Sure, warranty exists but none of the other products being tested needed a warranty cycle.
AirGradient (and several commenters here) feels like they're trying to spin their own QC problems as an indictment of modern journalism.
What have we learned about AirGradient improving their product in response to the review?
That they "immediately sent replacement parts and a new unit, including repair instructions, as repairability is one of our core differentiators".
That's good, I genuinely respect that. But are there going to be improvements in QC protocol? Consideration of a bigger display? Apparently not, or at least there's no mention of this.
Instead they launch into a distracting and unproductive discussion about reviews in general, missing the entire point of the review's critiques and an opportunity to make a better product, or at least to clarify why they don't see a need for better QC, or don't think a bigger display would be a good idea.
> AirGradient (and several commenters here) feels like they're trying to spin their own QC problems as an indictment of modern journalism.
That's my take as well. I have seen several reviews where there were issues with the unit received, where the unit was replaced and an update was made to the review. It's not like this is the only situation to have ever happened. A manufacturer complaining about someone else complaining about a valid problem with what they received is just petty. Man up, admit there were problems, accept the loss in revenue by sending out replacement units. You screwed up the product during manufacturing/design/wherever, own it. Once you do that, stop whining about people correctly calling out defects in the units even if you did fix it. Until you recall/replace every defective unit (not just the ones with owners making noise about it), you have no standing to be upset about someone making valid points about the defects.
This is what you could call a learning opportunity. Instead, they come across as petulant and whiny. Just take your medicine and grow and learn to not make the same mistake on future products
I actually tried to reach out to Louis Rossmann a few times but haven’t got a response (yet).
I think what’s most interesting is that we figured out a business model based on open source hardware that’s sustainable. Thus a win-win for the manufacturer and the customer.
Repairability was actually a feature we designed into the product from the start.
My immediate reaction when landing on one of these "top 10 X products" lists is that the list is sorted according to the size of the kickback the author gets when users click on affiliate links. For the most part, I don't believe there is any such thing as a legitimate product review on the internet any more.
There is a vanishingly small collection of youtubers that I might still trust when it comes to product reviews, and that list is shrinking.
Not exactly on topic, but does anyone else feel that the bolded key phrases actually make it harder to read? I find my eyes jumping between them without absorbing the rest of the text.
I have one and I generally agree that the screen sucks, it could really use an upgrade.
Also it’s a pain in the a to zero out the co2 sensor the first time.
I would probably not recommend it to someone who does not like to dabble a lot with the tech. It’s not really an “it just works and it’s easy for everyone” product.
I wouldn’t worry too much, tbh, if I were AirGradient. I don’t think anyone trusts Wired for serious tech reviews, and the target audience would veer towards the plug-and-play crowd anyway.
My AirGradient monitor has been online for years, sending data to Prometheus reliably. I’ve been able to plot the air quality across a few climate events and the introduction of a Samsung air filter in my bedroom. It’s a good little product.
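For anyone curious what that pipeline can look like, here is a minimal sketch of one way to do it: poll the monitor's local HTTP API and re-expose the readings as gauges for Prometheus to scrape. The endpoint URL, port, and JSON field names below are assumptions for illustration and will vary with firmware version, so treat this as a starting point rather than the official integration.

    # Minimal exporter sketch (assumed endpoint and field names, adjust to your firmware):
    # poll the monitor's local JSON endpoint and expose readings as Prometheus gauges.
    import time
    import requests
    from prometheus_client import Gauge, start_http_server

    MONITOR_URL = "http://192.168.1.50/measures/current"  # hypothetical local endpoint

    PM25 = Gauge("airgradient_pm25_ugm3", "PM2.5 concentration in ug/m3")
    CO2 = Gauge("airgradient_co2_ppm", "CO2 concentration in ppm")

    def poll_once():
        """Fetch one reading from the monitor and update the gauges."""
        data = requests.get(MONITOR_URL, timeout=5).json()
        # Field names below are assumptions; inspect your unit's actual JSON payload first.
        PM25.set(float(data["pm02"]))
        CO2.set(float(data["rco2"]))

    if __name__ == "__main__":
        start_http_server(9100)  # Prometheus scrapes this exporter on :9100
        while True:
            try:
                poll_once()
            except Exception as exc:  # a network blip shouldn't kill the exporter
                print(f"poll failed: {exc}")
            time.sleep(60)

From there, Grafana (or Prometheus's own graph view) can plot exactly the kind of long-term trends described above.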
Rotten luck, but at the same time, reviewers review the device in front of them. They can’t really throw that experience out on the basis of some assumption that it’s not representative.
I own several AirGradient monitors and have used other brands in the past. As far as I am concerned AirGradient is clearly superior, not only for ease of use, repairability and their open source approach, but also because of their tremendous enthusiasm for getting accurate data and being totally transparent about the strengths and weaknesses of the technology.
I have one of these AirGradient indoor units. I also have a dedicated RadonEye Bluetooth device and Airthings wave Pro.
The OLED display is nice, but I rarely care in realtime what the exact metrics are. I have that stored as time series stats so I can see trends over time. Exactly like I do for metrics of production systems in SRE life.
The unit also has a series of LEDs across the top and I can read the actual status from 20’ away (which is as far as I can get without going out a window or around a corner).
One green led? Good.
Two green leds? Meh.
Three LEDs? They’re red now and that’s not great
Single red led in the top left in addition to any on the right? Spectrum is having yet another outage (no internet)
No LEDs? Not powered on
Reviewer was overly severe and did his readers a disservice.
It’s better imho than my Airthings wave pro, and it lets me get awesome, actionable time series data. It’s sensitive enough to show air quality taking a dive overnight with two people and multiple pets breathing in the same room (one green during the day, three or four red at night), and also to show that adding a half dozen spider plants keeps co2 well in check (consistent one green led).
And I can read the air quality from across the room without getting out of bed.
Yeah, installing this accidentally gamified air quality for my wife. She really wanted to see green on the LEDs. It also drove home the fact that using a gas stove hits the air quality in the house for an hour or two; that's now on the replacement list.
The fact that I can keep using this even if the vendor goes out of business was a major selling point, but also home assistant integration.
I highly recommend these (I have indoor and outdoor units).
Open source performs better, but is less convenient/accessible than a polished consumer product with inferior technical chops?
Even ignoring the broken display, which I think is a red herring here (it would be relevant if this unit had a pattern of quality issues or failures indicating systematic production issues), I think that's the story here.
I appreciate the response from airgradient, assuming it's all true.
The product with fewer features being recommended makes sense if it does those things well. The customer buying it is aware of the limitation, whereas the customer buying your unit and getting a broken display is in for a disappointment. Maybe it was an unfair review, but this comparison is lame.
Shipping them a product with a broken display implies shoddy manufacturing and crap quality control. Any other customer could end up buying this substandard product too, and being disappointed by the lack of workmanship.
Why not take responsibility for that instead of complaining about an honest review?
The review clearly states that the screen failed after a few months. How exactly do you expect them to QC an issue like that? If the part arrives at your factory in full working order, you don't typically spend much time trying to make it fail, unless you discover that there is a latent issue affecting a large enough proportion of those parts that the cost of screening becomes worth it.
Every volume product has failures, and a single datapoint is not enough to say anything about a product's quality (good or bad). At the same time, a failure during a review is absolutely something that should be mentioned, and it gives an opportunity to test the company's RMA process. At the very least, a failure like that should cause someone to look into how many others online have similar issues.
> Shipping them a product with a broken display implies shoddy manufacturing and crap quality control.
Eh, what? I've received products that needed warranty repair/replacement from Apple, Toyota, Philips, Nintendo, Ikea, Breville, etc. (All of those examples which provided good service in repairing/replacing the product in question.)
A single data point of a broken product doesn't tell you anything of value.
I would much prefer AirGradient over the IQAir AirVisual Pro, despite WIRED’s recent evaluation favouring the latter. Source: I “own” (see below) an IQAir AirVisual Pro, which I bought directly from IQAir’s website, and I have seen an AirGradient unit in someone else’s home.
One reason is, in fact, the screen. The AirVisual Pro’s display is bright, but it is bright all of the time; it cannot be made dim enough to be suitable for use in the bedroom, for example: the blueish white LCD is basically a small light fixture. Furthermore, the contents of the screen are readable only from a narrow angle (think looking straight at it; putting it on top of a tall fridge or down on your windowsill makes it illegible). I would much prefer an e-ink display.
Second, on their website IQAir states that their air monitors are made in Europe[0]. This is a false claim. In fact, the AirVisual Pro is made in the PRC, as declared on the box. I would not be against a good product made in the PRC, and the AirVisual Pro is in fact known for good characteristics regarding accuracy, but it seems like a dark pattern at best, and they clearly want to mislead customers.
Third, the enclosure featured a charging USB port (an obsolete micro USB variety that is incredibly hard to find cables for) that was very finicky and gave up the ghost 3 months in. The device just wouldn’t charge its battery or see any power at all thereafter, so it basically became a brick of cheap plastic for all intents and purposes. I can’t be bothered to disassemble the enclosure and try to repair it, since I can’t stand the bright screen anyway and I have already got the hang of air quality patterns where I live.
It did the job, sure, but if AirGradient’s PM2.5 and carbon dioxide detectors do nearly as good of a job[1] it makes it a much more compelling option for me.
Unfortunately, as of the time I last checked, AirGradient shipped to a small set of countries which did not include my area; by comparison, IQAir has a much wider coverage.
[0] You can still see the proud large “Swiss made” in the relevant section of their website (https://www.iqair.com/products/air-quality-monitors). Furthermore, if you Google the question, the LLM-generated answer suggests that
> The IQAir AirVisual Pro air quality monitor is Swiss-designed and manufactured. While IQAir is headquartered in Switzerland, their manufacturing facilities for air purifiers, including the AirVisual Pro, are located in both Switzerland and Southern Germany
I checked out the Wired article. The "not recommended" verdict seems to mostly be about the fact that it's a tiny display, which some people (like the reporter) will have trouble reading. That the screen degraded didn't help, of course. The reporter doesn't want to use a web dashboard to check the readings on his indoor air monitor. I think that's a fair comment, maybe just a bit harsh to put it in the not recommended bucket. I understand that this can affect sales quite a bit for a small supplier like AirGradient.
It's linked in the article but here it is. https://www.wired.com/gallery/best-indoor-air-quality-monito...
> However, the reviewers logic is difficult to follow when you compare it across products:
> - Our monitor: Downgraded due to a faulty display (a warranty-covered hardware issue).
> - Another Monitor: Recommended, despite having no display at all.
> - Another Monitor: Also recommended, despite lacking a CO2 sensor—one of the most critical metrics for assessing indoor air quality and ventilation.
I'd rather buy no display and get no display than buy yes display and get no display
Buy display, it’s super tiny, and then it breaks after a couple months.
LEDs mean something, you can program them, or not.
A screen I can read from across the room that changes would be nice. Not interested in running a server or going to a webpage or app to see my indoor air quality.
That’s not the dichotomy. You do get a display. The reviewer’s didn't work and instead of waiting for the replacement unit they just published a “not recommended” and told AirGradient “sorry I don’t get paid enough to give your product a fair shot”.
The problem with that "no display" example is that the no-display monitor isn't failing to do something it's trying to do, which AirGradient's unit did, from the reviewer's perspective.
It's not a failure that the one without a display doesn't have a display. It's a design choice. The AirGradient unit has a display, but it's tiny and hard to read. Scrolling through the article, all the other units with displays have much larger and more readable text. You can read the biggest data points from across a room. The AirGradient has a display, but it fails to be a good display, hence the reviewer's perspective - it's not living up to its goals.
This feels best summarized as:
• Product A has limited features but does them well. If the customer is okay with the features the product has, the reviewer can recommend it for this customer.
• Product B has more features but is impacted by QA issues as well as product design decisions that make those features harder to use. This impacts the customer's ability to use features they might've paid for, and it may even impact their ability to use features core to other products. This potentially makes Product B less desirable for comparable use cases.
With this in mind, I'm inclined to agree with Wired's decision.
I don't agree with this kind of thinking. You have to determine exactly what you want; anything else is just "nice to have". If you want an air monitor and don't care about a screen, having a screen is just a "nice to have" and should not affect your experience. And if you do care about the screen, you should also remove all no-screen air monitors from the list.
The screen is the “how,” not the “what.” An item is designed to function with the parts it has. A unit designed without a screen is doing the job another way—say, with a row of LEDs or something. I care whether I can find out what the air quality is, I don’t care whether or not I do that using a screen. It seems to me like relying on a finicky screen was a poor design decision.
It also raises my eyebrows that they see “repairability [as] one of our core differentiators.” It’s cool to make that possible quietly for people who are into it, but would you want a “repairable” smoke detector? Or one that just works? If it broke, would you want them to send you one that’s not broken, or parts and a booklet of repair instructions?
Do that enough and the category becomes irrelevant. Every product is a unique snowflake owing to some perfect combination of features (has a screen, 30 cm³ volume, 19.3 dB loud, etc.).
If I pay for X, I will be mad if I can only use X-1.
I have one of these. The unit has bright LEDs that display the levels at a glance. The key point is that you can actually program the LEDs to report the parameter you're most interested in as well, which is great. The LEDs are part of the "display" too.
I have an air gradient monitor.
There are three outputs. LEDs that go from green to yellow or red. The small display. A webpage dashboard. Or you can plug the data into HA for whatever you want.
The only issue I have with the display is that it’s monochrome, which prevents making trends easy to read, for example by showing positive changes as green and negative ones as red.
If the display is too small, the LEDs are easily visible for quick information, and the dashboard is there for more data.
Reviewers often have issues really understanding how people use products, often because rapidly changing from one review item to the next doesn’t allow them the time to truly use and understand a product.
> Reviewers often have issues really understanding how people use products, often because rapidly changing from one review item to the next doesn’t allow them the time to truly use and understand a product.
The reviewer states:
> I’ve been using AirVisual Pros for the past five years.
so it's not like they're new to the field. They know what they want out of the product they're reviewing. That may not be what someone reading the review may be after, but that doesn't invalidate the review.
I'm not sure this contradicts GP's point - this reinforces that the author is comfortable using a different product (AirVisual Pro) and may therefore, almost paradoxically, struggle more with a product that displays data differently than someone who has never used either product.
To draw a parallel, I think an iPhone user may have a harder time using Android than someone who has never used either phone.
Admittedly, I'm another happy AirGradient user.
Yes, time to adjust and familiarize oneself with a new product is often too short. Let alone adopting new sensors into a smart home ecosystem and fully setting everything up the way you want, rather than just the default settings, etc.
When I get a new laptop or phone, I generally dislike it at first, because it's not what I am familiar with and it's not yet set up just the way I like it.
And then there is the fact that the reviewer's favoured product has a logo for the reviewer's publication on its product page.
There is certainly potential for financial interests to impact reviews.
That, and many reviewers are given the "here's 20 products, I need an article hitting these buzzwords in X Weeks" task.
You can't just do that and get in quality testing time with more than one or two products.
Reviewing things fairly and helpfully is hard and takes time, and especially as AI slop takes over writing (thankfully it looks like this article at least has a byline), I think it's going to be harder and harder to find actual useful human reviews to guide decisions.
Come on, these are air quality measuring devices. You set them all up, set their panels next to each other for the duration of the testing, and then you evaluate their performances. It doesn't take much time. You look at them when you take readings at whatever interval to compare accuracy. You glance at them to see readability. It doesn't require a lot of effort.
This is quite different from being tasked with comparing bicycles which would require a lot of effort to give equal time to each one. Unless the journo was a world class rider, I'd be shocked if they rode any one of them for more than 5 minutes.
Applied inversely to bikes: Come on, these are bikes. You get on the bike, you pedal around, it doesn't take much time.
These devices usually have between 3 and 8 sensors inside (with wildly varying quality and quality control), run firmware that _usually_ has access to your WiFi or requires an app to run on your phone (security implications), and are meant to exist in your home for years at a time.
Good reviews which consider all those aspects take time and effort, even for simpler devices.
If you are actually in good faith comparing the zero energy collecting of values from a set of monitors to the physical exertion of riding a bicycle as the same thing, then I just cannot have a conversation with someone that is deliberately being that obtuse.
Why do you believe that collecting data, collating it into useful information and making conclusions from that information is "zero energy"? Yes, testing bicycles will require more calories and exertion, but that doesn't mean that testing computers, sensors, or other technological devices is a zero energy effort.
This seems like a big assumption to make since that isn't the reasoning presented in the review.
> I understand why I need to check a dashboard for an outdoor air quality monitor, but having to check a dashboard for an indoor monitor seemed like an extra unnecessary step.
This is after already mentioning that the unit also has LED light bars to display quality without reading the number.
The reviewer seems to be saying that just lights and a web dashboard aren't enough for an indoor monitor.
Yet earlier in the article the author picked the "Touch Indoor" as the unit with the best display, even though that is an indoor unit with no screen and only LED lights.
Given that, you'd think the AirGradient unit's lights would be compared to say why they are worse, but that doesn't happen.
Having read the Wired reviews, they set off my internal alarms for a "low quality reviewer" who doesn't display a deep understanding of the products being reviewed or the market segment. There's a lot of fluff and stuff about screen size and very little digging into actual accuracy and functionality.
That said, I haven't seen any good reviews from Wired in a long time.
The logic is more like “this is the only company that shipped us a broken review unit.”
The transparency of this company is nice but you can’t control what other people think about your products based on their experiences. It isn’t really “unfair” at all, at least not 100% unfair. They are essentially upset that the press is allowed to have an independent opinion.
If I buy a laptop and the screen is broken (warranty issue!) that’s still a lot worse of an experience than a desktop PC that has everything working. The excuse that a desktop PC doesn’t include a screen isn’t relevant, the idea is that the competition shipped a fully functioning product.
I guess you didn't read the review, because that isn't what happened.
> After using the monitor for a few months, the display began to fall apart into unreadable characters.
Looks like the AirGradient went up against a similar device with a huge colour screen that's easy to read. No wonder they preferred that, although I'm not sure if the $100 price difference is worth it for other people.
The AirGradient screen isn't even that small, but the UI could be much more user-friendly IMO. There's a reason all the other meters with screens do HUGE NUMBER + tiny label.
I'm sure many people will prefer the AirGradient, but I don't think the reviewer is wrong for having different preferences.
But it also had an LED light to show the quality at a glance.
I don't think any of this review is unfair. Shipping a broken product and then covering the fix with your warranty is better than shipping a broken product and telling the consumer to get bent, but worse than shipping a working product. A product that advertises fewer features and delivers them is better than a product that advertises more features and doesn't deliver them. Even in the case of the CO2 sensor, it may be your opinion that a CO2 sensor is critical and can't be done without, but the review is for the feature set at the price point. A device that does fewer things can be a better device depending on price point and, again, reliability.
If, however, I concede the author's idea that reviews must have objective criteria, methodology and standards in order to be taken seriously then I'd like to propose the first objective criterion: broken out-of-the-box === not recommended.
edit: evidently the device failed after a few months. This doesn't change my final opinion, which is in total agreement with the review, but it deserves to be mentioned because I was incorrect in my facts. For my fellow JS devs, I'm standing by broken out-of-the-box === not recommended, and adding broken within a few months of installing == not recommended.
From the article, I did not pick up that the device was broken when it was sent out. It failed "After using the monitor for a few months".
That is still not good; devices tend to follow the bathtub curve, so if it took only a few months for it to fail, they either got really unlucky or the device is highly unreliable.
I don't have hard data, but I own 3 AirGradients and they've been going strong for years. I don't think it's unreasonable to think they just got unlucky.
>they just got unlucky
then let me ask you: as an ethical person trying to write a review, how do you handle that situation? It seems like that's an angle to this that we're not exploring, and that's whether the review is epistemologically justified rather than whether it's objectively correct. The way I see it, as a reviewer I get a product that fails well sooner than I expected it to and I have three choices:
1) Don't report the failure
2) Report the failure
3) Report the failure but try to contextualize it (basically, trying to solve for sure whether they got unlucky or not)
1 is obviously unethical, and 3 seems like it's well outside the scope of a reviewer's duty (and could be seen as carrying water for a particular brand; after all, do you think the person who wrote the OP would be okay with it if his product's failure was considered typical but another product's failure was determined to be atypical, regardless of the truth?). The only ethical approach is to report what happened, and not speculate as to cause.
Your 2nd option is what I would like to see. And I do agree that it is not perfect, but as you pointed out, it is the least bad option.
Do what Linus Tech Tips has done (when they're doing it right): report that they got the device, tested it for a while, and it broke, so what they currently know about it is not the full picture, and that they might revisit the product when they have confirmed that they've worked out the kinks with the device in question.
Easy
That feels like exactly what the article did. They disclosed that the thing broke and on what time frame, alongside details about the degradation of functionality. As for saying that this isn't the full picture: is it definitely not? If we owe someone the benefit of the doubt when their device breaks while being reviewed, do we owe the reader the deficit of the doubt when it doesn't break? What if they just got lucky?
if your sample size is 1 of each product, your review is likely going to be unfair to someone. Especially as you test more and more models.
This. And what if the reviewer or someone else smashed the box unreasonably?
Achim from AirGradient here. Good to see that my post has been submitted here. Happy to answer any questions you might have.
I spend quite a long time writing this post and it actually helped me to see the bigger picture. How much can we actually trust tech reviews?
I am already getting very interesting results from the survey I posted and am planning to write a follow-up post.
Ultimately, you shipped a broken product.
That points to a lack of QA on your part and, I think, it is fair for a reviewer to point out.
Even if you have an exemplary warranty process and easy instructions, that's still a hassle. Not everyone has the confidence or the time to repair simple things.
As for the objective/subjective nature of reviews: are your customers buying air monitors for their 100% precision or for "entertainment" purposes / lifestyle factors?
I have a cheap Awair air monitor. I have no idea if it is accurate - but it charges by USB-C and has an inconspicuous display. That's what I wanted it for.
It is perfectly fair for a reviewer to point out their personal preferences on something like this. They aren't a government testing lab.
If you sell a physical thing, some percentage of them will have defects. That's just a fact of manufacturing.
It seems unfair to move to "not recommended" due to a single instance of a hardware failure, especially if the manufacturer made it right. And repairability is one of their core values!
At most this should've triggered a "this happened to me, keep an eye out if this seems to be a thing" note in the review instead of moving it to not recommended.
If you got food poisoning from a restaurant, would you recommend it to your friends? After all, food-borne pathogens and poor hygiene are just a fact of life.
How about if they gave you a voucher for a free drink to say sorry?
Reviewing products is like interviewing people. You have to go by what you see on the day. You can't review (or interview) based on what could have happened; only on what did.
Yes, if it happened once. If I get food poisoning every time, probably not. Perfection is impossible; I am reasonable and mindful of the challenges of consistency.
Hardware device arrives damaged or non-functional? I’m just going to call and ask for another one. If it’s a critical need (I cannot wait for a return and delivery cycle), I’m buying more than one upfront. Spares go in inventory.
> If you got food poisoning from a restaurant, would you recommend it to your friends? After all, food-borne pathogens and poor hygiene are just a fact of life.
There are standard practices that avoid the vast majority of food poisoning. Poor hygiene is not a fact of life, it's a failure of process in a restaurant.
There are no known standard practices that avoid all faulty electronics at anything like a reasonable price. From the sounds of it, this unit worked initially but failed over time. That's what warranties are for; this is why they exist. As a society we've decided that it's kind of okay if _some_ products fail early, as long as the companies make it right when they do. And it doesn't sound like the company had any reluctance about doing that here.
There is no corresponding societal understanding for your analogy.
> Ultimately, you shipped a broken product.
They shipped a working product that failed after several months, was covered under warranty, and was also repairable at home (which the review doesn't even mention).
> It is perfectly fair for a reviewer to point out their personal preferences on something like this.
But that isn't what happened. The product was included as "not recommended", not just "this wasn't my favorite due to X but would be good for this type of person".
They make up a category to list every other unit as "best for" but decided that nobody should want to buy this one because the author got annoyed.
The review is poorly written and doesn't do a good job of actually comparing units. It is the kind of review article that is mostly fluff, with very few details about actual differences revealed during testing, which I have learned to ignore when looking for information about what to buy.
A few thoughts from me on the discussion so far, which I find incredibly insightful. Many thanks to everyone sharing their perspectives. I truly appreciate it.
On subjective reviews: I think there's absolutely nothing wrong with reviews based primarily on an author's subjective opinion. However, such reviews should be appropriately labeled. For example, "My Favorite Air Quality Monitors" rather than "The Best Indoor Air Quality Monitors". The title sets reader expectations for objective evaluation with consistent methodology.
On the defective display: Important clarification: we did not ship a broken device. The display issue developed during the review period—this wasn't a QC failure on our part. Hardware can fail during use (as it can with any electronic device), which is exactly why we immediately offered replacement parts, a new unit, and detailed repair instructions when we learned about it.
On the tiny display and lessons learned: We're well aware that opinions on our display vary significantly, as evidenced by this discussion. Some users love it, others find it too small. We actually have differing opinions within the AirGradient team as well. We're planning a refresh of our indoor monitor next year and are currently testing around 10 different display types—including e-ink, colour OLED, touchscreens, and others. So far, we haven't found the ideal replacement, but we're planning to involve our community later this year to gather feedback on the various options.
> For example, "My Favorite Air Quality Monitors" rather than "The Best Indoor Air Quality Monitors". The title sets reader expectations for objective evaluation with consistent methodology.
Unfortunately, that ship has sailed. There have now been so many review articles for so very long titled "Best X" when the nature of the review is "... in the subjective opinion of the review author" that it is unlikely anyone views a "best X" article as having any objective evaluation or rigor behind it at all.
Your suggestion would be nice to enforce, but there's no way to get that ship back to port to change its course now.
eInk.
I hate blinking lights in a bedroom.
Super smart move. I hadn't heard of you folks before, but I'm interested in your product - open source and repairability are high on my list for home monitors. I'm lying in bed awake right now due to an air quality issue, so it's top of mind.
The only thing you're missing for me is radon detection. I just bought a house and tests came in below remediation levels, but the report showed a lot of spikes and variance. Do you have any plans for a model with radon detection in the future?
I think your concerns are legit, but it's not necessarily the reviewer's fault that they're on three deadlines and don't have the time to give your product the care and concern it deserves. It's probably the editor's or the publisher's.
I'm a world class writer but I stopped doing it for a living a long time ago. Why? Because as media moved from print to online, the work was devalued. I've worked for 25 cents a word sometimes, which was pretty decent when one 1200 word piece could pay rent back then. Nowadays, writers are offered $25 per article flat with no compensation for rewrites. Staff positions pay badly for too much work but are as coveted as C suite gigs are in the tech world. Maybe more so.
So if the reviewer is staff, they might be assigned three or four reviews in a given week on top of other work. If they're freelance, they might have to take on more just to make their rent. This is because your average magazine staffer who's not management pulls about as much as a Starbucks manager, and was ever thus, unless you got in at Vanity Fair or The Atlantic back in the Before Times.
It's like when I was reviewing albums for $50 a pop: I'd get a stack of them to review and cue up track one and if I didn't get hooked pretty quick, I'd just pop in the next one.
Your device arrived damaged, which is absolutely no one's fault, but your reviewer doesn't have time or honestly impetus to give it a second chance. Not for whatever they're getting paid for that review, which is not much at all.
It's just bad luck, is all. And yes, it's not fair and, yes, you're right to complain, but it's not as simple as "tech writer lazy".
(And if anyone's response is "They accepted the job, they should do their best at it no matter how little it pays", I'm guessing you've never had to duck your landlord to try not to get evicted before the freelance check you've been hunting up for three weeks arrives. There's a reason I'd rather make a living as a mediocre coder than a very good writer these days - at its worst, the tech industry is more remunerative and stable than the publishing industry is.)
> I think your concerns are legit, but it's not necessarily the reviewer's fault that they're on three deadlines and don't have the time to give your product the care and concern it deserves. It's probably the editor's or the publisher's.
You wrote it; you don't get to dodge the responsibility like that. Professional integrity still matters.
> It's like when I was reviewing albums for $50 a pop: I'd get a stack of them to review and cue up track one and if I didn't get hooked pretty quick, I'd just pop in the next one.
That seems unethical.
> There's a reason I'd rather make a living as a mediocre coder than a very good writer these days
As coders, we also have an ethical responsibility to push back against code that will harm people.
We need more writers to say no to writing stories with insufficient time/resources to do them ethically, just as we need more developers who push back against building unethical products.
> I'm a world class writer
This is awkward, but I think you mean ”I'm a world-class writer”.
Be kind. Don't be snarky.
One of my least liked HN rules. I'm from a culture where good-natured ribbing (snark) is not considered unkind.
I rather enjoy snark, whether by me, at me, or just reading.
Snark is well tolerated here, when it's not boring.
From your "pop out" in the article:
"is that this review is ... pretty much purely based on the personal preferences of the author."
You've found the core takeaway about nearly all "product reviews" in nearly all publications. They are almost all simply "the personal preferences of the author".
These authors have neither the time nor the science skills for anything even beginning to look like a rigorous scientific review, and so the "best" vs. "ok" vs. "not recommended" tags get applied because the author liked the particular shade of pink used on a trim piece on one, or liked that another one looks like the Apple computer they are using, and so forth.
But they are never based upon any objective criteria, and are never (nor ever were intended to be) reproducible in any scientific fashion.
Yet, as you say, they have "great power" to influence buying decisions on the part of folks who read their reviews.
> But they are never based upon any objective criteria, and are never (nor ever were intended to be) reproducible in any scientific fashion.
This is also why review aggregators exist: if I'm just getting into a thing, such as watching movies or buying appliances, I probably need a general sense of how people collectively feel about a thing. But if I'm keenly aware of my preferences, it helps me to find reviewers who align with how I think. People routinely seek out specific reviewers or specific publications for this reason.
For instance, someone reading this review might conclude "I really appreciate that ease of use is a topic that's front of mind with this reviewer." Another reviewer's focus might be customizability, and they might recommend AirGradient. And that reviewer's audience follows that person likely because those concerns are front of mind.
...to be honest, if AirGradient had responded more along those lines ("we prioritized X and Y. We understand if this isn't the best fit for customers looking for Z, but we're working on it"), it would've felt more empathetic and less combative to me.
> Another Monitor: Recommended, despite having no display at all
Isn't this an outdoor one? Outdoor ones aren't expected to have a display because you want to check them without going outdoors.
This seems reasonable.
No, this is actually the purple air indoor monitor which has no display.
Is Wired really the first place people think of for this kind of product review?
Which sites and publications would you recommend, and which are biased for financial reasons?
I for one would reach for Consumer Reports well before Wired for recommendations. I might still read the review at Wired for the details and decide for myself what’s important rather than their recommendation.
Will your device measure hydrogen sulfide levels?
First, the thing I'm not really seeing mentioned anywhere here in the HN comments is that a separate AirGradient sensor was #3 on the list of "recommended" sensors, and was specifically called "Best Budget Quality Air Monitor". I also can't seem to find this mentioned in the piece that you wrote, either. Why not highlight that success?
You write:
>How can a product be penalized for a failing display when another recommended product has no display?
This is an incredibly perplexing take. A display is subjective - whether or not the customer wants one is up to the customer. What the customer does want is a functional product, so regardless of what another product's features are, if that product functions as intended and yours does not, of course it's going to be recommended over yours.
>How can an indoor monitor without CO2 sensing - essential for understanding indoor air quality - be recommended over one that includes this crucial measurement?
Again - the products without CO2 sensors functioned as intended. It is indeed accurate that CO2 is one of the most critical metrics for assessing indoor air quality, but it goes back to my previous comment - perhaps the customer is more interested in PM2.5 indoors than CO2 for a specific reason. We don't know. Ultimately, the CO2-less sensors functioned as intended, whereas yours did not.
You go on to say:
>And specifically for situations like this: How would you want us to handle it? Should companies stay quiet when review methodology breaks down? Should we be more aggressive in calling this out? Or is transparency and open discussion the right approach?
Maybe focus less on one review and more on improving the product? As another comment states, you shipped a broken product and it suggests that there's a problem with your QA process. Further, you state early on:
>Let me be clear: this was a legitimate hardware failure, and we take full responsibility for it. As soon as we learned about the issue, we immediately sent replacement parts and a new unit, including repair instructions, as repairability is one of our core differentiators.
Let's maybe hear more about that. How/why did the hardware fail? Did you examine your QA process and make any improvements to it? Highlight these steps, as well as the "core differentiator" that is your repairability, rather than asking perplexing questions about why one reviewer didn't like your product.
As an "average Joe" customer in this area, the whole response feels excessive and... whiny (driven by the fact that you don't highlight that you did, in fact, have a product on the list that was well recommended). I don't say that to be terribly mean, it's just a bit off-putting. You're not necessarily wrong about product reviews like this in general, but like... who cares? Put the effort into making a solid product, not taking umbrage with one person's opinion.
There will be more reviews, and some of them will be negative. You're not going to be able to control perception and opinion, and nobody will ever get perfect marks from everyone. Learn to be OK with that.
Edit: I just saw your response about this not being a hardware failure when shipped. Still, the general concept of my point remains - detail what you're doing to determine how this happened and prevent it in the future, rather than griping about the review process. If "transparency is how [you] operate", lemme hear the deets about this issue!
Yes, I did not mention our outdoor monitor being featured because I don't see this as a success when I critique the whole methodology of the review.
As I mentioned above, we are working on a refresh of the indoor monitor. The display is also under discussion, but so far, with tens of thousands of indoor monitors sold, I am only aware of a single-digit number of failed displays.
I have your product. I like your product. I like your company. Fundamentally, I can't disagree with the review. You got unlucky, and the reviewer was looking for a different product. It happens.
It's not a universal product. Competitors have upsides and downsides, and different people want different things.
I think this is the time to move on and focus on the people who like and appreciate you, not dwell on those who don't. Success brings more of both, and if you can't handle a few haters, you probably don't want to be too successful.
And reviews are imperfect, but a lot better than no reviews. Accept life isn't perfect.
Thanks for a great product and for running a company with integrity.
The fact you didn’t include a tired / wired meme feels like a missed opportunity
There seems to be a trend of companies attacking reviewers and claiming their feelings got hurt when they dislike the review: the Nothing CEO reacting to reviews on YouTube, Rabbit R1, the Humane AI Pin, Fisker cars, and now AirGradient, I guess.
To anyone else reading, I would recommend it; AirGradient is the only real contender that checks all the boxes for an air monitor. Love the way they spun this narrative.
I agree. I love my AirGradient monitor. I’ve had it for years and it still works wonderfully.
What about Apollo Air? I have one of each, and they both check all the boxes.
I don't even have a preference. They both seem nearly ideal, depending on context.
Have both the pro and the outdoor, both have been great.
Wired sold out to Condé Nast long ago. They're the tired ones.
This sounds like something Louis Rossmann should cover as a counter-example of mfgrs trying to do the right thing but fickle, corporate reviewers behaving in a petty, unfair manner.
The reviewer bought the product and received a broken unit. Why is it unfair to write about their actual experience with the product? Sure, warranty exists but none of the other products being tested needed a warranty cycle.
AirGradient (and several commenters here) feels like they're trying to spin their own QC problems as an indictment of modern journalism.
What have we learned about AirGradient improving their product in response to the review?
That they "immediately sent replacement parts and a new unit, including repair instructions, as repairability is one of our core differentiators".
That's good, I genuinely respect that. But are there going to be improvements in QC protocol? Consideration of a bigger display? Apparently not, or at least there's no mention of this.
Instead they launch into a distracting and unproductive discussion about reviews in general, missing the entire point of the review's critiques and an opportunity to make a better product, or at least to clarify why they don't see a need for better QC, or don't think a bigger display would be a good idea.
> AirGradient (and several commenters here) feels like they're trying to spin their own QC problems as an indictment of modern journalism.
That's my take as well. I have seen several reviews where there were issues with the unit received, the unit was replaced, and an update was made to the review. It's not like this is the only time this has ever happened. A manufacturer complaining about someone else complaining about a valid problem with what they received is just petty. Man up, admit there were problems, accept the loss in revenue from sending out replacement units. You screwed up the product during manufacturing/design/wherever; own it. Once you do that, stop whining about people correctly calling out defects in the units even if you did fix them. Until you recall/replace every defective unit (not just the ones with owners making noise about it), you have no standing to be upset about someone making valid points about the defects.
This is what you could call a learning opportunity. Instead, they come across as petulant and whiny. Just take your medicine, grow, and learn not to make the same mistake on future products.
Agree.
I actually tried to reach out to Louis Rossmann a few times but haven’t got a response (yet).
I think what’s most interesting is that we figured out a sustainable business model based on open source hardware. That’s a win-win for the manufacturer and the customer.
Repairability was actually a feature we designed into the product from the start.
Oh you totally need to send him a diy kit to assemble for a video!
I think people are missing the fact that Wired has been about “vibes” since the beginning.
Wired vs. tired is literally about what’s “cool.” That’s it. It has never been rigorous about anything.
Yeah, and video games were just a way to distract kids for a few hours so parents could watch tv or read a book without being bothered.
It's become something else. Wired has a brand name and a reputation, so when they pooh-pooh something, it has more weight than if you or I do.
My immediate reaction when landing on one of these "top 10 X products" lists is that the list is sorted according to the size of the kickback the author gets when users click on affiliate links. For the most part I don't believe there is any such thing as a legitimate product review on the internet any more.
There is a vanishingly small collection of youtubers that I might still trust when it comes to product reviews, and that list is shrinking.
Not exactly on topic, but does anyone else feel that the bolded key phrases actually make it harder to read? I find my eyes jumping between them without absorbing the rest of the text.
It's marketing, they need you to remember their key phrases more than they need you to read their full rebuttal.
They don't seem to be as interested in the fact that their outdoor monitor was the recommended outdoor solution either.
> I find my eyes jumping between them without absorbing the rest of the text.
That's the idea. The caveats they don't want you to remember are left unbolded.
I have one and I generally agree that the screen sucks, it could really use an upgrade.
Also it’s a pain in the a to zero out the CO2 sensor the first time.
I would probably not recommend it to someone who does not like to dabble a lot with the tech. It’s not really an “it just works and it’s easy for everyone” product.
This is an immature response which comes across as "Indoor Air Quality Monitor" populism.
A more professional response would just stick to the facts rather than trash the reviewer and pontificate on what is wrong with journalism today.
I wouldn’t worry too much tbh if I were AirGradient. I don’t think anyone trusts Wired for serious tech reviews, and the target audience would veer towards the plug-and-play crowd anyway.
My AirGradient monitor has been online for years, sending data to Prometheus reliably. I’ve been able to plot the air quality across a few climate events and the introduction of a Samsung air filter in my bedroom. It’s a good little product.
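For anyone wanting to do something similar, here's a rough sketch of a tiny exporter you could point a Prometheus scrape job at. It assumes the open firmware exposes a local JSON endpoint like /measures/current with field names such as rco2 and pm02; the hostname, port, and field names below are assumptions, so check your firmware's docs before relying on any of this.

    # Sketch: re-serve AirGradient readings in Prometheus text format.
    # The endpoint, field names, hostname, and port are all assumptions.
    import json
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    MONITOR_URL = "http://airgradient.local/measures/current"  # hypothetical hostname

    def scrape() -> str:
        """Fetch current readings and render them as Prometheus metrics."""
        with urllib.request.urlopen(MONITOR_URL, timeout=5) as resp:
            m = json.load(resp)
        return "".join(
            f"airgradient_{name} {m.get(key, 'NaN')}\n"
            for name, key in [
                ("co2_ppm", "rco2"),
                ("pm25_ugm3", "pm02"),
                ("temperature_celsius", "atmp"),
                ("humidity_percent", "rhum"),
            ]
        )

    class MetricsHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Serve the latest readings on any GET path.
            try:
                body, status = scrape().encode(), 200
            except OSError:
                body, status = b"monitor unreachable\n", 503
            self.send_response(status)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Point a Prometheus scrape job at http://<this-host>:9926/metrics
        HTTPServer(("", 9926), MetricsHandler).serve_forever()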
It seems the disconnect is that the reviewer mostly reviewed on a vibe check, while the product is excellent if you're doing a science check.
Given the AirGradient is $100 cheaper than the winning product, I think the review might have been a little harsh.
Rotten luck, but at the same time reviewers review the device in front of them. They can’t really throw that experience out on the basis of some assumption that it’s not representative.
I own several AirGradient monitors and have used other brands in the past. As far as I am concerned AirGradient is clearly superior, not only for ease of use, repairability and their open source approach, but also because of their tremendous enthusiasm for getting accurate data and being totally transparent about the strengths and weaknesses of the technology.
I have one of these AirGradient indoor units. I also have a dedicated RadonEye Bluetooth device and an Airthings Wave Pro.
The OLED display is nice, but I rarely care in real time what the exact metrics are. I have that stored as time series stats so I can see trends over time, exactly like I do for metrics of production systems in SRE life.
The unit also has a series of LEDs across the top and I can read the actual status from 20’ away (which is as far as I can get without going out a window or around a corner).
One green LED? Good. Two green LEDs? Meh. Three LEDs? They're red now and that's not great.
Single red LED in the top left in addition to any on the right? Spectrum is having yet another outage (no internet).
No LEDs? Not powered on.
Reviewer was overly severe and did his readers a disservice.
It’s better imho than my Airthings Wave Pro, and it lets me get awesome, actionable time series data. It’s sensitive enough to show air quality taking a dive overnight with two people and multiple pets breathing in the same room (one green during the day, three or four red at night), and also to show that adding a half dozen spider plants keeps CO2 well in check (a consistent single green LED).
And I can read the air quality from across the room without getting out of bed.
Yeah, I accidentally gamified air quality for my wife by installing this. She really wanted to see green on the LEDs. It also drove home the fact that using the gas stove hits the air quality in the house for an hour or two; the stove is now on the replacement list.
The fact that I can keep using this even if the vendor goes out of business was a major selling point, as was the Home Assistant integration.
I highly recommend these (I have indoor and outdoor units).
Open source performs better, but is less convenient/accessible than a polished consumer product with inferior technical chops?
Even ignoring the broken display, which I think is a red herring here (it would be relevant if this unit had a pattern of quality issues or failures indicating systematic production issues), I think that's the story here.
I appreciate the response from airgradient, assuming it's all true.
The product with fewer features being recommended makes sense if it does those things well. The customer buying it is aware of the limitation, whereas the customer buying your unit and getting a broken display is disappointed. Maybe it was an unfair review, but this comparison is lame.
I pretty much assume that all product review sites are at worst crooked and at best biased.
Is there a "meta review" site like metacritic, but for products?
If there is, expect it to be crooked too. Online reviews have been astroturfed to death. It's over.
Shipping them a product with a broken display implies shoddy manufacturing and crap quality control. Any other customer could end up buying this substandard product too, and being disappointed by the lack of workmanship.
Why not take responsibility for that instead of complaining about an honest review?
The review clearly states that the screen failed after a few months. How exactly do you expect them to QC an issue like that? If the part arrives at your factory in full working order, you don't typically spend much time trying to make it fail unless you discover that there is a latent issue that affects a large enough proportion of those parts where the cost of screening becomes worth it.
Every volume product has failures, and a single data point is not enough to say anything about a product's quality (good or bad). At the same time, a failure during a review is absolutely something that should be mentioned, and it's also an opportunity to test the company's RMA process. At the very least, a failure like that should cause someone to look into how many others online have similar issues.
> Shipping them a product with a broken display implies shoddy manufacturing and crap quality control.
Eh, what? I've received products that needed warranty repair/replacement from Apple, Toyota, Philips, Nintendo, Ikea, Breville, etc. (All of those examples which provided good service in repairing/replacing the product in question.)
A single data point of a broken product doesn't tell you anything of value.
Customers could care less about accuracy if the unit is hard to use. I.e., UX, features, and price > *.
How much less?
I would much prefer AirGradient over the IQAir AirVisual Pro, despite WIRED’s recent evaluation favouring the latter. Source: I “own” (see below) an IQAir AirVisual Pro, which I bought directly from IQAir’s website, and I have seen an AirGradient unit in someone else’s home.
One reason is, in fact, the screen. The AirVisual Pro’s display is bright, but it is bright all of the time; it cannot be made dim enough to be suitable for use in the bedroom, for example: the blueish-white LCD is basically a small light fixture. Furthermore, the contents of the screen are readable only at a narrow angle (think looking straight at it; putting it on top of a tall fridge or down on your windowsill makes it illegible). I would much prefer an e-ink display.
Second, on their website IQAir states that their air monitors are made in Europe[0]. This is a false claim. In fact, the AirVisual Pro is made in the PRC, as declared on the box. I would not be against a good product made in the PRC, and the AirVisual Pro is in fact known for good accuracy characteristics, but it seems like a dark pattern at best, and they clearly want to mislead customers.
Third, the enclosure featured a charging USB port (which is an obsolete micro USB variety incredibly hard to find cables for) that was very finicky and gave up the ghost 3 months in. The device just wouldn’t charge its battery or see any power at all thereafter, so it basically became a brick of cheap plastic for all intents and purposes. I can’t be bothered to disassemble the enclosure and try to repair it since I can’t stand the bright screen anyway and I already got the hang of air quality patterns where I live.
It did the job, sure, but if AirGradient’s PM2.5 and carbon dioxide detectors do nearly as good a job[1], that makes it a much more compelling option for me.
Unfortunately, as of the time I last checked, AirGradient shipped to a small set of countries which did not include my area; by comparison, IQAir has a much wider coverage.
[0] You can still see the proud large “Swiss made” in the relevant section of their website (https://www.iqair.com/products/air-quality-monitors). Furthermore, if you Google the question, the LLM-generated answer suggests that
> The IQAir AirVisual Pro air quality monitor is Swiss-designed and manufactured. While IQAir is headquartered in Switzerland, their manufacturing facilities for air purifiers, including the AirVisual Pro, are located in both Switzerland and Southern Germany
which is not true.
[1] Perhaps someone can comment on that; I don’t see SenseAir sensors listed on https://www.aqmd.gov/aq-spec/evaluations/summary-table.