daemin 11 hours ago

I read this article and then looked at my GitHub and a few other projects and found no issues created by Copilot. As someone else has said, they need to be triggered manually, so it's the same sort of problem as with the Curl project bug bounty, where people were spamming automatically LLM-generated, fictional problems. In that case because there's potential money to be made; in the GitHub Copilot case, I guess, because they're trying to contribute to open source for whatever reason.

As far as Visual Studio Code goes, I've not really used it much, but it makes sense since it's Microsoft's free editor, so you will be a product and you will be marketed to. I do use Visual Studio though, and it does show Copilot in the UI by default, but there is an option to "hide Copilot" which does what it advertises. I will probably remove my important projects from GitHub though, mainly so they are not used for LLM training rather than anything else.

  • pvtmert 5 hours ago

    Whether or not GitHub themselves create these issues or pull requests, a bunch of folks will do it (manually) for sure. I mean, Hacktoberfest is coming soon, and so are the low-quality typo fixes. Since there is now Claude Code, Cursor, et al., I am really curious how people are going to fight the pull-request spam. Especially open-source projects which claim they do not accept LLM-generated content.

    P.S.: Most people just do it either to "light up" their GitHub profile for job applications or just to get cheap swag...

  • oefrha 10 hours ago

    Yeah, as a maintainer with fairly popular projects (at least more popular than any project from the linked issue reporters; I’ve checked), I’ve gotten exactly zero Copilot issues or PRs. As for useless review comments, lol, nothing beats useless comments from users (+1, entitled complaints, random drive-by review approvals serving god knows what purpose, etc.); you probably shouldn’t be doing open source if you’re annoyed by useless comments.

    And good luck stopping people from pasting from ChatGPT or Gemini or whatever. Those are free, unlike Copilot agent PRs which cost money, which is part of why I don’t see any.

    I guess some people just have too much time and will happily waste it on useless complaints.

  • the__alchemist 10 hours ago

    Same experience. Does anyone have info on this discrepancy in observations?

    • daemin 10 hours ago

      I read this article after it was shared on social media by the Codeberg.org account, so I thought it was a PR piece, as it doesn't mention self-hosting at all, just moving to another hosted platform.

    • TiredOfLife 4 hours ago

      The article is by theregister.com. Basically The Onion of tech media

benrutter 9 hours ago

Tangential, but I think GitHub's secret weapon of inertia is... (drumroll) GitHub stars.

They're still seen by a lot of people as a sign of project maturity and use. My unfounded suspicion is that if they all disappeared tomorrow, people would be a lot more likely to try alternative code forges.

I've been using codeberg of late, more because of their politics than anything, but in all honesty the user experience between github/gitlab/codeberg/sourcehut/gitea is near identical.

  • ivanjermakov 8 hours ago

    I think it's a lot harder for an OSS project not hosted on GitHub to find contributors and gain traction in general. Network effect, as always.

    • pornel 8 hours ago

      It depends where your contributors are coming from. For example for Rust, the crates index is the discovery mechanism. Contributors will come to your repo by whatever link you put in your package's metadata. I've split my Rust packages between GitHub and GitLab and don't see a difference in participation.

    • LtWorf 2 hours ago

      It's the way I want it, to be honest. Keeps the low-effort garbage away for the moment.

  • jazzyjackson 5 hours ago

    I never understood going by stars when there's a much stronger signal in how many issues are being tracked and closed. Very easy to see if it's software people actually use.

    • zahlman 4 hours ago

      > a much stronger signal in how many issues are being tracked and closed

      This is a strong signal, but what it signals is confused. How much of it is the nature of the user base in actually reporting issues? Suppose the project receives regular fixes and issues are promptly closed on average — how much of that is because the project has to constantly respond to external factors, and how much is due to developers doing constant fire-fighting on an intrinsically poor design and not getting around to new functionality? Suppose there are lots of outstanding issues — how many of them are effectively duplicates that nobody's bothered to close yet?

    • dvfjsdhgfv 2 hours ago

      Yeah, this is the first thing I check, and also I verify a few closed ones to understand how they were handled.

  • wiether 7 hours ago

    You can add two more things:

    - 2000 minutes of free compute time with GitHub Actions

    - a free Docker Hub alternative with unlimited pulls (they say you're limited to 500 MB, but I currently have probably 20+ GB of images on my free account)

    They have the community aspect AND the freebies

  • IshKebab 8 hours ago

    It's one factor but I think they have more important "secret weapons":

    1. Network effects; people already have an account.

    2. Free CI, especially free Mac and Windows CI.

  • LtWorf 2 hours ago

    There are websites to buy stars :D It's like fake reviews.

fatchan 10 hours ago

Github is my push --mirror location, nothing more. Main is a popular Gitlab instance gitgud.io, and I host my own secondary mirror.

Gitlab is of course adding more AI and corpo garbage, and once they prevent disabling these "features" on community editions we'll see a fork of gitlab, probably.

The assertion that github is some bustling hub of opportunity is a strange one. At best you get people more likely to contribute because they already signed up, and a contribution from somebody not willing to sign up to another free service or simply email you an issue report is a contribution worth missing.
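
The mirror setup is just a second remote pushed with --mirror (the remote name and URL below are placeholders for your own):

```shell
# add a second remote that serves purely as a mirror target
git remote add gh-mirror https://github.com/example/project.git

# push ALL refs (branches, tags, notes) and delete any refs the
# mirror has that no longer exist locally -- an exact copy, not a merge
git push --mirror gh-mirror
```

A cron job or CI step running that push keeps the mirror current.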

  • clickety_clack 7 hours ago

    Yep, the headline on the Gitlab landing page is now “Build software, not toolchains. With native AI at every step.”

    I’d love to find a stripped down solution that focused on hosting code repos. I don’t think GitHub see it as their core business anymore.

  • skydhash 9 hours ago

    I think it’s mostly people around the JS/Go/Rust ecosystems that tend to be vocal about GitHub being a community. For a lot of projects I couldn’t care less if it was just cgit or gitea.

    It’s quite easy to setup git to send patch via email. And you can always use a pastebin to host the diff if you’re sharing ideas. Bit I guess that’s not as visible as the GitHub dashboard.

3np 9 hours ago

I've had an ongoing support ticket with GH for several months now asking them to actually disable Copilot, as there is Copilot all over the place and it's clear from the inlined JSON on github.com pages, when signed in, that my account is actually opted in to Copilot features despite the Settings page saying those features are disabled. I've never opted in to anything related to GH AI and am not a VS Code user.

They keep closing the ticket and saying it's "with the engineering team". I keep reopening and asking for resolution, escalation, or progress.

GitHub did have working and professional support in the past but in 2025 they are just malicious.

It's surreal.

  • e40 9 hours ago

    Please point to the ticket so we can add to your voice.

  • progval 9 hours ago

    Do you have an example of "inlined JSON on github.com pages"? I can't imagine what this looks like.

    • 3np 9 hours ago

      Not near an account right now but literally just "View Source" when signed in and search for "copilot" and it's there along some other feature-flags for the user in a JSON blob inside a script tag.

      • 63stack 5 hours ago

        Not sure why this is downvoted; I just checked and I do see a JSON object inside a <script type="application/json" id="client-env"> tag that has all kinds of Copilot-related keys.

        I checked my profile and copilot is enabled with a "lock" icon, I cannot disable it. I have never enabled it.
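
        A rough shell sketch for checking this yourself, assuming you've saved a signed-in github.com page to a file (the filename is a placeholder; the tag id is the one quoted above, and grepping HTML like this is fragile by nature):

```shell
# extract the client-env JSON blob from a saved GitHub page
# and list any copilot-related keys found inside it
grep -o '<script type="application/json" id="client-env">[^<]*' github.html \
  | sed 's/^<script[^>]*>//' \
  | grep -o '"[^"]*copilot[^"]*"' \
  | sort -u
```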

AlexandrB 10 hours ago

I find it weird how companies talk out both sides of their mouth on AI. On the one hand it's this magical tool that will make you 10x more efficient at your job, and on the other it's something they have to market heavily and shove in your face at every turn - sometimes outright forcing you to engage with it. These two things don't seem compatible - if the tool was that good people would be beating down their doors to get it.

  • hyperpape 9 hours ago

    From Dan Luu (https://danluu.com/wat/):

    > When I joined this company, my team didn't use version control for months and it was a real fight to get everyone to use version control. Although I won that fight, I lost the fight to get people to run a build, let alone run tests, before checking in, so the build is broken multiple times per day. When I mentioned that I thought this was a problem for our productivity, I was told that it's fine because it affects everyone equally. Since the only thing that mattered was my stack ranked productivity, so I shouldn't care that it impacts the entire team, the fact that it's normal for everyone means that there's no cause for concern.

    Do not underestimate the ability of developers to ignore good ideas. I am not going to argue that AI is as good as version control. Version control is a more important idea than AI. I sometimes want to argue it's the most important idea in software engineering.

    All I'm saying is that you can't assume that good ideas will (quickly) win. Your argument that AI isn't valuable is invalid, whether or not your conclusion is true.

    P.S. Dan Luu wrote that in 2015, and it may have been a company that he already left. Version control has mostly won. Still, post 2020, I talked to a friend, whose organization did use git, but their deployed software still didn't correspond to any version checked into git, because they were manually rebuilding components and copying them to production servers piecemeal.

    • rossdavidh 9 hours ago

      All true, but the argument for AI is that it makes you far more productive as an individual, which if true should be an easy sell. In fact, some developers are quite committed to it, with a fervor I've not seen since the "I'm never going back to the office" fervor a few years ago. Version control is more of a "short term pain for long term gain" kind of concept; it is not surprising some people were hard to convince. But "AI" promises increased productivity as an individual, in the here and now. It should not be a hard sell if people found it to work as advertised.

      • crazygringo 8 hours ago

        > it makes you far more productive as an individual, which if true should be an easy sell

        Writing unit tests where needed makes you more productive in the long run. Writing in modern languages makes you more productive. Remember how people writing assembly thought compiled languages would rot your brain!

        But people just resist change and new ways of doing things. They don't care about actual productivity, they care about feeling productive with the tools they already know.

        It's a hard sell when an application moves a button! People don't like change. Change is always a hard sell to a lot of people, even when it benefits them.

        • jacobolus 5 hours ago

          To the contrary, people resist change for good reasons: changes to tools rob attention and focus from the work, often for completely arbitrary or decorative reasons. Sometimes changes remove or break important aspects of the tool and force someone to waste time developing a new workflow which is, on average, no better than the previous one. It is vanishingly rare that the software team making the changes in question did sufficiently rigorous testing to show that the new version is a net "benefit" for most users of the software; they don't have time for that. All too often, no significant group of users was even consulted about the changes, which were made for reasons like advancing someone's career ("shipped X feature changes") or looking different for the sake of marketing something merely re-arranged as new ("the old style was so 2018").

          The teams making changes to software are, on average, moderately worse than the teams who originally developed the software, if only because they missed out on the early development experience, and often don't fully understand the context and reasons for the original design and don't reason from first principles when making updates, but copy the aspects they notice superficially while undermining the principles they were originally established on.

          Even when the changes are independently advantageous, it is common for changes to one part of a system to gratuitously break a variety of other parts that are dependent on it. Trying to manage and fix a complex web of inter-dependent software which is constantly changing and breaking is an overwhelming challenge for individual humans, and unfortunately often not a sufficient priority for groups and organizations.

        • ThrowawayR2 5 hours ago

          > "Remember how people writing assembly thought compiled languages would rot your brain!"

          No, I don't remember that and I've been around awhile. (I'm sure one could find a handful of examples of people saying that but one can find examples of people saying sincerely that the earth is flat.) It was generally understood that the code emitted by early, simple compilers on early CISC processors wasn't nearly as good as hand-tuned assembly code but that the trade-off could be worthwhile. Eventually, compilers did get good enough to reduce the cases where hand-tuned assembly could make a difference to essentially nothing but this was identified through benchmarking by the people who used assembly the most themselves.

          If you want to sell us on change, please stop lying right to our faces.

          • compiler-guy 5 hours ago

            Note also that it took, more or less, a hardware revolution in the form of RISC, to make compilers able to compete. A big piece of the RISC philosophy was to make it easier for compiler writers.

            They eventually got there, (and I expect AI will eventually get there too), but it took a lot of evolution.

            • tines 5 hours ago

              Really? x86 isn't RISC, and it ruled the world during, not before, the time of compilers.

              • hyperman1 3 hours ago

                Starting with the 386, the ISA got a lot more compiler-friendly. Up to the 286, each register had a specialised task (AX, CX, DX, BX mean Accumulator, Count, Data, Base register), and some instructions worked with specific regs only (xlat, loop). When the 386 and 32 bits happened, the instructions became more generic and easily combinable with any register. I remember people raving over the power of the SIB byte, or the possibility to multiply any pair of registers. While not RISC, the ISA clearly got easier for compilers to target, and I remember reading in magazines that this was an explicit design intention.

              • compiler-guy 5 hours ago

                Lots of x86 assembly out there from that time period. Beating the compiler in the eighties and nineties was a bit of a hobby and lots of people could do it.

                Modern ISA designers (including those evolving the x86_64 ISA) absolutely take into account just how easy it is for a compiler to target their new instructions. x86 in modern times has a lot of RISC influence once you get past instruction decode.

        • non_aligned 5 hours ago

          > Writing unit tests where needed makes you more productive in the long run.

          Debatable? It has positive effects for organizations and for the society, but from a selfish point of view, you gain relatively little from writing tests. In your own code, a test might save you debugging time once in a blue moon, but the gains are almost certainly offset by the considerable effort of writing a comprehensive suite of tests in the first place.

          Again, it's prudent to have tests for more altruistic reasons, but individual productivity probably ain't it.

          > Writing in modern languages makes you more productive.

          With two big caveats. First, for every successful modern language that actually makes you more productive, there's 20 that make waves on HN but turn out to be duds. So some reluctance is rational. Otherwise, you end up wasting time learning dead-end languages over and over again.

          Second, it's perfectly reasonable to say that Rust or whatever makes an average programmer more productive, but it won't necessarily make a programmer with 30 years of C++ experience more productive. This is simply because it will take them a long time to unlearn old habits and reach the same level of mastery in the new thing.

          My point is, you can view these through the prism of rational thinking, not stubbornness. In a corporate setting, the interests of the many might override the preferences of the few. But if you're an open-source developer and don't want to use $new_thing, I don't think we have the moral high ground to force you.

          • hamburglar 4 hours ago

            > In your own code, a test might save you debugging time once in a blue moon

            It’s much more than this. You feel it when you make a change and you are super confident you don’t have to do a bunch of testing to make sure everything still behaves correctly. This is the main thing good automated tests get you.

          • genghisjahn 4 hours ago

            What are 20 dud languages that have been hyped on HN? Not meaning to snark, serious question.

        • compiler-guy 5 hours ago

          Early compilers really did suck. They were long term big wins for sure, but it wasn't unreasonable for someone who was really good at hand assembly, on tightly constrained systems, to think they could beat the compiler at metrics that mattered.

          Compilers did get better, and continue to--just look at my username. But in the early days one could make very strong, very reasonable, cases for sticking with assembly.

        • haskellshill 7 hours ago

          > Remember how people writing assembly thought compiled languages would rot your brain!

          Well, how would you describe web apps of today if not precisely brainrot?

          > They don't care about actual productivity, they care about feeling productive

          Funny you'd say that, because that describes a large portion of "AI coders". Sure they pump out a lot of lines of code, and it might even work initially, but in the long run it's hardly more productive.

          > It's a hard sell when an application moves a button!

          Because usually that is just change for the sake of change. How many updates are there every day that add nothing at all? More than updates that actually add something useful, at least.

        • bigstrat2003 4 hours ago

          > Change is always a hard sell to a lot of people, even when it benefits them.

          You're assuming that the change is beneficial to people when you say this, but more often than not that just isn't true. Most of the time, change in software doesn't benefit people. Software companies love to move stuff around just to look busy, ruin features that were working just fine, add user hostile things (like forcing Copilot on people!), etc. It should be no surprise that users are sick of it.

      • krinchan 8 hours ago

        As someone who started out a GenAI skeptic, I’ve found the truth is in the middle.

        I write a TON of one off scripts now at work. For instance, if I fight with a Splunk query for more than five minutes, I’ll just export the entire time frame in question and have GHCP (work mandates we use only GHCP) spit out a Python script that gets me what I want.

        I use it with our internal MCP tools to review pull requests. It surfaces questions I didn’t think to ask about half the time.

        I don’t know that it makes me more productive, but it definitely makes me more attentive. It works great for brainstorming design ideas.

        The code generation isn’t entirely slop either. For the vast majority of corporate devs below Principal, it’s better than what they write, and it’s basic CRUD code. So that’s where all the hyper-productive magical claims come from. I spend most of my days lately bailing these folks out of the dead-end foxholes GHCP led them into.

        Unfortunately, it’s very much a huge time sink in another way. I’ve seen a pretty linear growth in M365 Copilot surfacing 5 year old word documents to managers resulting in big emails of outdated GenAI slop that would be best summarized as “I have no clue what I’m talking about and I’m going to make a terrible technical decision that we already decided against.”

        • clickety_clack 7 hours ago

          What is GHCP?

          • hydhyd 7 hours ago

            It appears to be GitHub Copilot

            • clickety_clack 7 hours ago

              Ah! I was trying to fit 4 words into the acronym, like “GitHub Hosting Cloud Platform” or something.

    • dcminter 8 hours ago

      It's an excellent point - but a lot of the pressure to use AI in orgs is top-down and I've never seen that with useful tech tools before; they always percolated outward from the more adventurous developers. This makes me wary of the AI enthusiasm, even though I acknowledge that there is some genuine value here.

      • bwfan123 7 hours ago

        I felt the same way. The analogy I use is management dictating the tech-stack to use across the org. It does not make any sense ! They need to stay in their lanes, and let engineering teams decide what is best for their work.

        Big tech's general strategy is get-big-fast - and then become too-big-to-fail. This was followed by facebook, uber, paypal, etc. The idea is to embed AI into daily behaviors of people whether they like it or not, and hook them. Then, once hooked, developers will clamor for it whether it is useful or not.

      • crazygringo 8 hours ago

        > I've never seen that with useful tech tools before

        I've seen it all the time. Version control, code review, unit testing, all of these are top-down.

        Tech tools like git instead of CVS and Subversion, or Node instead of Java, may be bottom-up. But practices are very much top-down, so I see AI fitting the pattern very well here. It feels very similar to code review in terms of the degree to which it changes developer practices.

        • dcminter 7 hours ago

          Nope, all of those things were dev driven until they'd diffused out as far as management and only then did they start getting enforced top-down. Often in awful enterprise software ways actually.

          • crazygringo 6 hours ago

            But that's what I'm saying.

            Obviously developers invented these things and initially diffused the knowledge.

            But you're agreeing with me that they then got enforced top-down. Just like AI. AI isn't new or different like this. Developers started using LLM's for coding, it "diffused" so management became aware, and then it becomes enforced.

            There's a top-down mandate to use version control or unit testing or code review or LLM's. Despite plenty of developers initially hating unit tests. Initially hating code review. These things are all standard now, but weren't for a long time.

            In contrast to things like "use git not Subversion" where management doesn't care, they just want you to use a version control.

            • dcminter 6 hours ago

              Sigh, enforced is always top down, sure, if you want to be pedantic. But normally the process starts with enthusiastic devs, propagates out through other devs until a consensus is reached (e.g. source control is the only sane way) and then management starts to enforce it - often with a crappy enterprise take on the basic idea (I'm looking at you IBM Team Connection and Microsoft Visual SourceSafe).

              AI seems to have primarily been pushed top-down from management long before any consensus has been reached from the devs on what it's even good for.

              This is unusual; I suspect the reason is that (for once) the tech is more suitable for management functions than the dev stuff. Judging from the amount of bulletpointese generation and condensation I've seen lately anyway.

              • crazygringo 4 hours ago

                It's not pedantic, it's the very issue being discussed.

                And there have been plenty of enthusiastic devs regarding LLM's.

                And the idea that "until a consensus is reached" is just not true. These practices are often adopted with 1/3 of devs on board and 2/3 against. The whole point of top-down directives is that they're necessary because there isn't broad consensus among employees.

                It was the same thing with mobile-first. A lot of devs hated it while others evangelized it, but management would impose it and it made phones usable for a ton of things that had previously been difficult. On the balance, it was a helpful paradigm shift imposed top-down even if it sometimes went overboard.

                • dcminter 3 hours ago

                  Do you know a lot of devs who, having tried VCS, were against it?

                  • crazygringo 2 hours ago

                    I lived through the transition, so absolutely.

                    Early VCS was clunky and slow. If one dev checked out some files, another dev couldn't work on them. People wouldn't check them back in quickly, they'd "hoard" them. Then merges introduced all sorts of tooling difficulties.

                    People's contributions were now centrally tracked and could be easily turned into metrics, and people worried (sometimes correctly) management would weaponize this.

                    It was seen by many as a top-down bureaucratic Big Brother mandate that slowed things down for no good reason and interfered with developers' autonomy and productivity. Or even if it had some value, it wasn't worth the price devs paid in using it.

                    This attitude wasn't universal of course. Other devs thought it was a necessary and helpful tool. But the point is that tons of devs were against it.

                    It really wasn't until git that VCS became "cool", with a feeling of being developer-led rather than management-led. But even then there was significant resistance to its new complexity, in how complicated it was to reason about its distributed nature, and the difficulty of its interface.

            • watwut 6 hours ago

              No, not just like AI. The difference is that these things were pushed by people on the bottom for years and run successfully before management top caught up. Like, years and years.

              AI does not have such curve. It is top down, from the start.

        • watwut 7 hours ago

          None of them was top down in companies I worked in at the time. They were all stuff developers read about and then pressured management and peers to start using.

          Management caught up and started to talk about them only years later.

          • dcminter 6 hours ago

            Yeah, RCS was what, early 80s? Devs I knew were mostly on CVS by mid 90s and around the time Subversion became common (late 90s) things like PVCS and Visual Source Safe were starting to be required by management. Perhaps a bit earlier with super technical orgs. That's a much more typical flow.

      • nlawalker 7 hours ago

        I think it's coming from both places, it's just that the top-down exhortations are so loud and insistent.

        I wasn't around to experience it but my understanding is that this is what happened in the 90's with object oriented programming - it was a legitimately useful idea that had some real grassroots traction and good uses, but it got sold to non-technical leadership as a silver bullet for productivity through reuse.

        The problem then, as it is now, is that developer productivity is hard to measure, so if management gets sold on something that's "guaranteed" to boost it, it becomes a mandate and a proxy measure.

        • bink 6 hours ago

          I think that's a good comparison. I was around in the 90s and I do remember OOP being pushed by all sorts of people who weren't coders. It was being pushed as the "proper" way to code regardless of the language, size, platform, or purpose of the program in question.

      • kace91 8 hours ago

        We might be in the rare case where the current smoke and mirrors fad in leadership happens to be something actually useful.

        Let’s not let the smoke and mirrors dictate how we use the tool, but let us also not dismiss the tool just because it’s causing a fad.

        • dcminter 6 hours ago

          I'm wary rather than skeptical I think. There's clearly value here. Whether we're paying the true costs or not, however, won't be clear until all the VC fumes have cleared.

          Much like the internet era actually - obviously loads of value, but picking out the pets.coms from the amazon.coms ... well, it wasn't clear at the time which was which; probably both really (we buy our petfood online) except that only one of them had the cash reserves to make it past the dot com crash.

      • groby_b 7 hours ago

        AI is the first dev tool that makes a difference that is immediately noticeable even for higher layers, that's why they apply pressure.

        The core problem, as OP called out, is change aversion. It's just that for many previous useful changes, management couldn't immediately see the usefulness, or there would've been pressure too.

        Let's not forget that well-defined development processes with things like CI/CD, testing, etc only became widespread after DORA made the positive impact clearly visible.

        Let's face it: Most humans are perfectly fine with the status quo, whatever the status quo. The outward percolation of good ideas is limited unless a forcing function is applied.

        • whateveracct 5 hours ago

          Execs can suddenly reliably measure productivity? Or does AI just give them easier-to-measure, short-term benefits?

    • hn_throwaway_99 6 hours ago

      Surprisingly enough, and pretty ironic given this discussion is about GitHub, the company Dan Luu is talking about there is Microsoft (specifically the SmartNIC team), based on his LinkedIn description of his 2015-2016 job.

    • bgwalter 8 hours ago

      Version control has quickly won. It was so popular that people kept writing new systems all the time. CI was popular. Most major open source projects had their own CI systems before GitHub.

      "AI", on the other hand, is shoved down people's throats by management and by those who profit from it in some way. There is nothing organic about it.

      • hyperpape 7 hours ago

        Version control is almost 50 years old. It has very slowly won.

        AI adoption is, for better or worse, voluntarily or not, very fast compared to other technologies.

        • Calavar 5 hours ago

          Version control took a while because most early version control systems were brittle and had poor developer UX. Once we got mercurial and git and nice web UIs, the transition was actually pretty fast IMHO.

          The same could be true for coding agents too, or maybe not. Time will tell.

        • jabwd 7 hours ago

          .... which is the problem here. The internet took decades. The iPhone didn't change anything this quickly either. We're seeing massive brain rot in many studies, and no real-world data that actually shows productivity gains.

          This adoption rate / shoving is insane. It is not based on anything but dollars.

          • utyop22 6 hours ago

            The way I think of it is the difference between financial wealth and real wealth.

            No new real wealth can be created but financial wealth may transfer from the firms buying stuff to the large tech firms - thereby creating new financial wealth for big tech stockholders. In the long run the two should converge - but in the short run they can diverge. And I think that’s what we are seeing.

        • therein 5 hours ago

          The fervor with which some feel the need to defend AI is what is incredible. Adoption, innovation, impact, not so much.

          The attempt to compare it with version control, with sliced bread, with plumbing and sanitation practices. Think of any big innovation and compare AI with it until people give in and accept this is the biggest bestest thing ever to have happened and it is spreading like wildfire.

          Even AI wouldn't defend itself this passionately but it conquered some people's hearts and minds.

    • CamperBob2 6 hours ago

      Sounds like a company full of seriously-terrible developers, from which no valid general conclusions can be drawn.

      I use AI a lot myself, but being forced to incorporate it into my workflow is a nonstarter. I'd actively fight against that. It's not even remotely the same thing as fighting source control adoption in general, or refusing to test code before checking it in.

    • sys_64738 9 hours ago

      > but their deployed software still didn't correspond to any version checked into git, because they were manually rebuilding components and copying them to production servers piecemeal.

      Programmers are not trustworthy, which is why you need a layer of protection that might seem like dead money (release engineering). But REs controlling production prevents programmers from carrying their deceitful practices into production environments.

      • hluska 9 hours ago

        What the fuck?

  • jsheard 8 hours ago

    GitHub isn't even the worst example of this at Microsoft, they didn't just force AI on Office users but also tricked them into paying extra for it. They unilaterally switched all personal and family accounts over to AI-enabled plans that were 30-40% more expensive, and hid the option to revert back to the old plan such that it's only offered as a last resort if you try to cancel your subscription.

  • tho2342o349423 10 hours ago

    At this point, pretty much all of the US markets (and the USD) are hinging on "unlimited upside" promised by techbros and their magic AGIs & robots. They probably get orders from all the way up the food chain to keep the show going.

    Wonder what'll happen to JPY once the Yen-carry unwinds from this massive hype-cycle - will probably hit 70 JPY to the dollar! Currently Sony Bank in Japan offers USD time-deposits at 8% pa. - that's just insanely high for what is supposed to be a stable developed economy.

    • chubot 9 hours ago

      They probably get orders from all the way up the food chain to keep the show going.

      Honestly I think the same thing happened with self-driving cars ~10 years ago.

      Larry Page and Google's "submarine" marketing convinced investors and CEOs of automakers and tech companies [1] that they were going to become obsolete, and that Google would be taking all that profit.

      In 2016, GM acquired Cruise for $1 billion or so. It seems like the whole thing was cancelled in 2023, written off, and the CEO was let go

      How much profit is Waymo making now? I'm pretty sure it's $0. And they've probably gone through hundreds of billions in funding

      How's Tesla Autopilot doing? Larry also "negatively inspired" Elon to start OpenAI with other people

      I think if investors/CEOs/automakers had known how it was going to turn out, and how much money they were going to lose 10 years later, they might not have jumped on the FOMO train

      But it turns out that AI is a plausible "magic box" that you extrapolate all sorts of economic consequences from

      (on the other hand, hype cycles aren't necessarily bad; they're probably necessary to get things done. But I also think this one is masking the fact that software is getting worse and more user hostile at the same time. Probably one of the best ways to increase AI adoption is to make the underlying software more user hostile.)

      [1] I think even Apple did some kind of self-driving car thing at one point.

      • bookofjoe 9 hours ago

        Apple car project

        https://en.wikipedia.org/wiki/Apple_car_project

        >From 2014 until 2024, Apple undertook a research and development effort to develop an electric and self-driving car,[1] codenamed "Project Titan".[2][3] Apple never openly discussed any of its automotive research,[4] but around 5,000 employees were reported to be working on the project as of 2018.[5] In May 2018, Apple reportedly partnered with Volkswagen to produce an autonomous employee shuttle van based on the T6 Transporter commercial vehicle platform.[6] In August 2018, the BBC reported that Apple had 66 road-registered driverless cars, with 111 drivers registered to operate those cars.[7] In 2020, it was believed that Apple was still working on self-driving related hardware, software and service as a potential product, instead of actual Apple-branded cars.[8] In December 2020, Reuters reported that Apple was planning on a possible launch date of 2024,[9] but analyst Ming-Chi Kuo claimed it would not be launched before 2025 and might not be launched until 2028 or later.[10]

        In February 2024, Apple executives canceled their plans to release the autonomous electric vehicle, instead shifting resources on the project to the company's generative artificial intelligence efforts.[11][12] The project had reportedly cost the company over $1 billion per year, with other parts of Apple collaborating and costing hundreds of millions of dollars in additional spend. Additionally, over 600 employees were laid off due to the cancellation of the project.[13]

        • andrepd 8 hours ago

          [flagged]

          • ch4s3 7 hours ago

            Ahh yes, capitalists noteworthy haters of building trains. If you ignore the private companies that built the NYC subway system(s), all of US freight rail, and invented trains.

            • andrepd 6 hours ago

              It's no coincidence all your examples are over 100 years old lol. The rise of the individual automobile (a very lucrative business proposition in many aspects) was done alongside massive sabotage of competing alternatives, with the disastrous consequences that are today plain.

              It's also no coincidence America has built no rail in many decades while centrally planned China built a massive HSR network in the past 15 years.

      • chubot 9 hours ago

        Also, I think Hacker News mostly believed the hype about self-driving cars, with relatively little pushback. Many people were influenced by what the CEOs/investors said, and of course the prospect of jobs and "cool tech"

        e.g. in 2018, over 7 years ago, I was simply pointing out that people like Chris Urmson (who had WORKED ON self-driving for decades) and Bill Gurley said self-driving would take 25+ years to deploy (which seems totally accurate now)

        https://news.ycombinator.com/item?id=16353541

        And I got significant pushback

        Actually I remember some in-person conversations with MUCH MORE push back than that, including from some close friends.

        They believed things because they were told by the media it would happen

        People told me in 2018 that their 16 year old would not need to learn how to drive, etc. (In 2025, self-driving is not available in even ONE of their end points for a trip, let alone two end points)

        Likewise, at least some people are convinced now that "coding as a job is going away" -- some people are even deathly depressed about it

        • pessimizer 7 hours ago

          Hacker News fell for f'n 3D TVs.

          Hacker News goes for anything that they think they might be able to make money off of, just like all middle-class people. They evaluate events based on how they could affect them personally. Actual plausibility isn't even secondary, they simply defer to the salesmen (whom they admire and hope one day to be.)

        • hluska 9 hours ago

          I get that you have a bone to pick but replying to yourself isn’t a good look. Sometimes rage gets such that people stop reading - I stopped reading you many paragraphs ago. We get that you were correct - would you like a cookie?

          • chubot 8 hours ago

            I don't need a cookie, because I simply read what an experienced engineer and an investor with skin in the game said literally IN THE NEWSPAPER

            and then wrote it on Hacker News, yet people didn't believe it

            Probably because they were told otherwise by influential people

            And again these aren't only HN conversations; people I know well argued in person HARD against 25 years for autonomy -- because they were told otherwise

            Marketing works -- it changes people's beliefs

            It's funny that in 2025 you still think that 25 years is "totally insane"

            • scns 5 hours ago

              People believe what they want. Ergo they side with the people who state that those beliefs are true; it makes them feel good. Listening to experts who know better and oppose the held beliefs makes the believers feel bad, ergo they won't believe them.

              Sometimes it is good to disregard the opinion of experts who are absolutely sure something can't be done; it might be a prerequisite to making it happen.

              The four minute mile comes to mind.

              Beliefs are powerful, they can enable you to reach goals, become prisons of the mind trapping you or become delusions when feedback is disregarded.

      • rozab 6 hours ago

        I recently watched Not Just Bikes' video on the disastrous future side effects of self-driving cars[0]. Of course it made me think about the massive PR push that made us think they were around the corner, but also about the manufactured consent for these technologies in the first place. Right now this kind of discussion is hitting the mainstream with the 'clanker'[1] backlash. I think it's really obvious to a lot of people that the AI push is not organic and is not based around consumer needs, and this manipulation is making people genuinely angry[2] (ok jreg is a performance artist, but just because something is performative doesn't mean it's not real).

        [0]: https://youtu.be/040ejWnFkj0?si=7yI3eKkirJdTWPwR [1]: https://en.wikipedia.org/wiki/Clanker [2]: https://youtu.be/RpRRejhgtVI?si=aZUVcsY8VyR_jbBA

      • bee_rider 8 hours ago

        Wonder how far along we’d be on the path to self driving without any hype cycles…

        I suspect stuff like lane following assist and adaptive cruise control

        1) will ultimately provide the path to self driving eventually

        2) wasn’t particularly helped by the hype cycle

        1 is impossible to say at this point; for 2 I guess somebody who works in the field can come along and correct me.

        • marcosdumay 6 hours ago

          There are commercial self-driving cab services operating in a couple dozen cities right now.

          That's where we are.

      • AlotOfReading 5 hours ago

            In 2016, GM acquired Cruise for $1 billion or so. It seems like the whole thing was cancelled in 2023, written off, and the CEO was let go
        
        It was shut down because they had a collision that made front page news across the country which was followed by a cover-up. Their production lines were shut down, all revenue operations ceased, and the permits they needed to operate were withdrawn. It's not like the decision was random.

            How much profit is Waymo making now? I'm pretty sure it's $0. 
        
        Profit is a fuzzy concept for even the most transparent private companies, but Waymo's revenue is likely in the hundreds of millions. They've received around $12B in funding, not hundreds of billions.

        • fragmede 5 hours ago

          Waymo's done 10M rides*. If we hand wave $10 per ride, that's $100M. Which is way more money than I have, but not all that much. It's still much bigger than $0 though!

          * https://www.cbtnews.com/waymo-hits-10m-driverless-rides-eyes...

          • AlotOfReading 4 hours ago

            That number was originally announced in May. They've completed an additional 3M rides since then if they've done no additional scaling. Their average fares are also closer to $15-20 than $10.
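            Plugging in both sets of assumptions shows how sensitive the hand-waved figure is (ride counts and fares here are the commenters' estimates, not Waymo disclosures):

```python
rides_may = 10e6   # rides announced in May, per the linked article
rides_now = 13e6   # + ~3M since then, per the reply's estimate

# Gross fare revenue under each assumption, in $M
handwave = rides_may * 10 / 1e6                        # $10/ride hand-wave
low, high = rides_now * 15 / 1e6, rides_now * 20 / 1e6 # $15-20/ride

print(handwave, low, high)  # 100.0 195.0 260.0
```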

      • jibe 6 hours ago

        GM has a 6-month attention span; abandoning self-driving is suicidal short-term thinking.

        Waymo has been slow and steady, and has built something pretty great.

      • Rover222 9 hours ago

        Not really your main point, but Tesla self driving is quite incredible, despite what internet clickbait says. They have a clear path to full autonomy with vision-only systems.

        But yeah, certainly 5-7 years behind the initial schedule. Which I guess was more of your point.

        • karlshea 8 hours ago

          You’re still falling for it. They have a clear path to vision-only autonomy IN THE BAY AREA.

          Let’s see it work in Minnesota in the winter where you can’t see lane markings, everything is white, and the camera lenses immediately get covered with road salt spray.

          • bink 6 hours ago

            Heck, I'm concerned how they're going to work in the Bay Area after an earthquake or cell network outage.

          • Rover222 4 hours ago

            Yeah I'm falling for it by using it every day in 95% of my driving. You're falling for media stories.

          • j45 7 hours ago

            For now.

            It's important to not confuse activity, with progress, with results.

            At the same time, it's important to not confuse or downplay results, with progress, with activity.

            There seems to be activity, progress, and results. It seems to be speeding up.

            I don't have any preference for or against Tesla. Just observing.

            • jcgrillo 7 hours ago

              > For now.

              What can incremental progress do to make a camera see through road salt deposited on its lens? I call bullshit. There isn't any incremental path because it's not physically possible. The photons are stopped by the salt. No amount of "AI" or what the fuck ever else will change this. There is no path towards "progress" here.

              • Rover222 4 hours ago

                You'll eat your words much sooner than you think. The cameras don't need much clarity to work effectively (they work quite well in intense rain). The main forward camera is behind the windshield already.

              • paradox460 7 hours ago

                Just for the sake of argument, they could use spinning lenses like you do on a camera in inclement weather

                • jcgrillo 6 hours ago

                  Yeah or some sort of washer/wiper system, but there's much better, safer technology for this. They could just use it.

                  • Rover222 4 hours ago

                    The front bumper cameras already have a spray wash

              • j45 6 hours ago

                My understanding is lenses should be inside the windshield, and a system should not operate if it can't see.

                I don't operate from an assumption that cameras will remain the same as they are today.

                Your comment did remind me about Comma, though.

                https://comma.ai/

        • chubot 9 hours ago

          OK, but I think it will end up being more than 25 years behind schedule, taking into account the claims

          which is what people like Chris Urmson and Bill Gurley already said prior to 2018 (see my sibling comment)

          https://en.wikipedia.org/wiki/List_of_predictions_for_autono...

          We're going to end up with complete autonomy

          Ultimately you'll be able to summon your car anywhere … your car can get to you. I think that within two years, you'll be able to summon your car from across the country

          ---

          (Also, in 2018 I said I'd be the first to buy a car where I could sleep behind the wheel while going from SF to Portland or LA. That obviously doesn't exist now.

          Anyone want to take a bet on whether this will be possible in 2032, 7 years from now? I'd bet NO, but we can check in 2032 :-) )

          • Rover222 8 hours ago

            Teslas are already driving an hour alone to deliver themselves from the factory.

          • bgwalter 8 hours ago

            It will coincide with the Year of the Linux Desktop.

          • hluska 8 hours ago

            25 years is far fetched. Again, you obviously have a bone to pick because HN disagreed with you, but this obsession with yours is such that you’re no longer making sense. 25 years?? Totally insane.

            • goku12 7 hours ago

              To be fair, I don't see any sort of technical arguments on either side to justify their claims. You need some clue about its internal design and its current state to make an educated guess about the expected development time. Without that, 25 years is as valid a guess as 5 years. But I won't dismiss any claims outright. I'm all ears if anyone has any explanation to offer.

    • bwfan123 7 hours ago

      I would say the same of the following:

      1) crypto: raise funding, buy crypto as collateral, raise more funding with said collateral, rinse and repeat.

      2) gpu datacenters: raise funding, buy gpus as collateral, raise more funding, buy more gpus, rinse and repeat.

      3) zero day options: average folks want a daily lottery thrill. rinse and repeat.

      All of the above are fed by fomo and to some extent hype, and ripe for a reckoning.

    • bookofjoe 9 hours ago

      FWIW when I lived in Japan in 1968-69 it was 360 JPY to the dollar. I felt like a millionaire!

  • alphazard 8 hours ago

    Organizations, once they reach a certain size, are usually not self consistent. Organizations are made up of people, and each person wants different things and has different incentives. It takes an excellent leader to make an organization appear consistent, it's not the default at all.

    Marketers are trying to keep their jobs, sales people are trying to keep their jobs, etc.

  • pylua 9 hours ago

    Is it really that weird that a company would speak from both sides of their mouth? That is essentially what corporate speak is. That should be the assumed default when a major company says anything.

  • mhh__ 9 hours ago

    I bet this was true of computers back in the day too. The processes that are native to computers are magical but adding computers to the old is actually quite bad e.g. paperwork is better done on paper

    • skydhash 9 hours ago

      You bet wrong. Computers were pricey enough that if you wanted one, you had to really need it to justify the price. It was not forced on any business.

      • mhh__ 9 hours ago

        How long ago? I was born after the Millennium, so I'm expecting my definition of back in the day is different to yours

        I think my time frame is firmly after the invention of Excel but before the web was its own thing

      • warmedcookie 9 hours ago

        Yep, electronics in general too. People today complain about GPU prices, but that was the norm for everything electronic related.

      • brabel 8 hours ago

        What are you talking about?? There was lots of resistance to computers at office jobs. Even through the 90s lots of people were still avoiding moving away from the old way and companies had to spend lots of money on training because without that people quickly reverted back to their old ways! I remember that and was taught how to provide such training and how to convince people to adopt new tech! It’s always been a challenge.

  • cmiles74 9 hours ago

    The less I know about a thing the more useful an LLM seems to be. I’m working with a new-to-me enterprise code base, and the LLM helps me find related (and duplicate) code. Even here its usefulness has an expiration date: eventually I’ll know where stuff lives and I’ll use it less and less. Life experience tells me I’m not unique, and I suspect the constant cram-AI-into-the-thing is because the vendors are hoping, eventually, they’ll find a use-case for LLMs that sticks.

    • giancarlostoro 9 hours ago

      This is the correct way. Use the LLM, don't let it become the only way you work.

    • immibis 9 hours ago

      Yes, they're legitimately good at a few things (e.g. very fuzzy search), but that doesn't justify the amount of investment.

      • goku12 7 hours ago

        The environmental damage is even worse than the investment. Those who have the money to invest in it usually care only about the returns and not the environment.

        • tempodox 7 hours ago

          Hardly anyone even mentions it in discussions. Even as far as these tools are actually useful, nobody ever asks whether it’s worth the environmental costs.

          • cshores 6 hours ago

            For Google's Gemini LLM, the energy impact is negligible, with the average prompt consuming the equivalent energy of just three seconds of a microwave's operation.

            • tempodox 3 hours ago

              All those data centers full of GPUs aren’t running on solar or wind power.

              • cshores 2 hours ago

                I did a bit of research on the environmental impact with regards to the United States. Recent numbers suggest that ChatGPT handles about 2.5 billion prompts per day worldwide, with roughly 330 million of those coming from the United States. Since the U.S. population is about 335 million, that works out to about one prompt per person per day on average, though actual users issue several times more.

                On the energy side, Google recently estimated that an average Gemini inference consumes around 0.24 Wh, which is roughly the same as running a microwave for a single second. Older rule-of-thumb comparisons put the figure closer to 3–6 seconds of microwave use, or about 0.8–1.7 Wh per prompt. If you apply those numbers to U.S. usage, you get somewhere between 79 MWh and 550 MWh per day nationally, which translates to only a few to a few dozen megawatts of continuous load. Spread across the population, that works out to between 0.09 and 0.6 kWh per person per year — just pennies worth of electricity, comparable to a few minutes of running a clothes dryer. The bigger concern for the grid is not individual prompts but the growth of AI data centers and the energy cost of training ever-larger models.
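                The comment's figures can be checked with quick back-of-the-envelope arithmetic (all inputs are the commenter's estimates, not measured data):

```python
us_prompts_per_day = 330e6   # commenter's estimate of US ChatGPT prompts/day
wh_low, wh_high = 0.24, 1.7  # per-prompt Wh: Google's Gemini figure vs. older rule of thumb
population = 335e6           # approximate US population

# Daily national load in MWh (1 MWh = 1e6 Wh)
mwh_low  = us_prompts_per_day * wh_low  / 1e6
mwh_high = us_prompts_per_day * wh_high / 1e6

# Annual per-person consumption in kWh (1 MWh = 1e3 kWh)
kwh_pp_low  = mwh_low  * 1e3 * 365 / population
kwh_pp_high = mwh_high * 1e3 * 365 / population

print(round(mwh_low), round(mwh_high))              # 79 561
print(round(kwh_pp_low, 2), round(kwh_pp_high, 2))  # 0.09 0.61
```

This matches the comment's 79-550 MWh/day and ~0.09-0.6 kWh per person per year.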

            • immibis 4 hours ago

              How much energy is otherwise consumed by a Google search?

              • cshores 2 hours ago

                I’m not entirely sure, since it seems that a very slimmed-down version of Gemini has been attached to search. It’s definitely not the full Gemini 2.5-Pro that engineers use to carefully reason through answers. Instead, it relies mostly on tool calling to stitch together a response.

  • garyfirestorm 7 hours ago

    I would like to point out that in certain scenarios people are not very smart. For example, many car enthusiasts who care about speed and 0-60 will still look down on EVs despite EVs being ridiculously fast and cheap at attaining those metrics. A 40k EV is faster than a 150k Porsche. But these guys will never adopt it.

    • vluft 7 hours ago

      That's not mostly what car enthusiasts care about, especially somebody buying a 150k Porsche; they care about handling and road feel, and a fat pig of an EV will never match a lighter car (well set up) on that. You can't beat physics when you're slinging that much weight; even set up as well as can be, a 5000lb Taycan doesn't come close to the handling feel of a 3250lb 911.

      • hedora 6 hours ago

        A BMW i3 weighs about 3000 lbs. They’re mostly fiberglass and have a small battery. The center of gravity is probably comparable to the 911 even though it’s tall and goofy looking (all the weight is in the battery, under the seats).

        The tire geometry causes a bit of oversteering, but they generally corner well, etc.

  • bigstrat2003 6 hours ago

    You're 100% correct. People love tools which make them more productive. If AI was actually as good as the companies pushing it claim it is, they wouldn't have to push it.

  • mrandish 4 hours ago

    > if the tool was that good people would be beating down their doors to get it.

    Yes! "Forced features" are a misguided effort to drive internal usage metrics. There are other ways to let users know about new features, short of forcing it on them obnoxiously.

    • johndhi 4 hours ago

      I think investors like to see adoption. So companies force it on users - and then brag that their users love their AI so much they're all using it.

      It's a rather perverse cycle.

  • yunwal 9 hours ago

    The other explanation is that it’s everywhere because AI pushers would like to integrate it with everything in order for it to be its most useful. Most of them don’t own your OS and your password manager, so they push it instead into a million different little places.

    Doesn’t change the fact that it’s stupid, annoying, and bad design, but I don’t know that outright deception is needed to explain it.

  • nativeit 6 hours ago

    Yes, this is what’s happening. I think it pretty well speaks for itself. It’s almost entirely hype. It’s got significant utility, just nowhere near enough utility to justify its astronomical costs and wastefulness.

  • tempodox 9 hours ago

    That’s what you get for making yourself dependent on profit-driven entities on a one-sided basis (namely that they have all the power and you have all the risk). Of course they will force the stuff on you that they hope will make them the most profit, up to the limit where you’d eat the switching costs and run away.

    • throwawa14223 7 hours ago

      Firefox also shipped their battery draining ai feature and they don’t have identical motives.

  • madeofpalk 5 hours ago

    I don’t think a company marketing a product/feature is an indicator that it’s bad.

  • pkaeding 9 hours ago

    They aren't really talking out of both sides, it is just all full-court-press marketing.

  • progval 9 hours ago

    Or they believe that people are too stupid to understand how good their product is.

  • redox99 9 hours ago

    Why wouldn't they do everything in their power to increase customer base?

    • goku12 7 hours ago

      I don't think that this is an effective strategy to achieve that. At some point, the same customer base is going to feel the AI fatigue and yearn for something cleaner.

  • delusional 10 hours ago

    The critical realization to connect those two ideas is that they don't believe in what they tell you. They are telling you what _needs_ to be true for them to be geniuses.

  • j45 8 hours ago

    Just as one programmer can figure out how to write code to solve a problem while others insist it's not possible and work around it some other way, the same is true of AI.

    One of the major issues I'm seeing is how much technical people haven't been involved in the application of AI, which leaves non-technical people to pontificate and try.

    With any new tech, after the hype is gone, what remains, is adopted and used.

    The internet, social media, smartphones, all seemed foreign.

    LLMs are no different. They will solve things other technologies haven't before.

    LLMs are only as good as the users using them. Users only get better at using AI by putting in the repetitions. It's not a tool that's alive, or a psychic.

    • nativeit 6 hours ago

      LLMs are very different. All of the other things you mentioned found wide, eager adoption among the broader public within two years.

      • nativeit 5 hours ago

        They were/are also profitable.

    • cshores 6 hours ago

      I generally agree, but I think there is a real disconnect. Middle and upper management often do not understand how developers and engineers are actually supposed to use these tools.

      For example, I work in operations, so most of what I touch is bash, Ansible, Terraform, GitHub workflows and actions, and some Python. Recently, our development team demonstrated a proposed strategy to use GitHub Copilot: assign it a JIRA ticket, let it generate code within our repos, and then have it automatically come back with a pull request.

      That approach makes sense if you are building web or client-side applications. My team, however, focuses on infrastructure configuration code. It is software in the sense that we are managing discrete components that interact, but not in a way where you can simply hand off a task, run tests, and expect a PR to appear.

      Large language models are more like asking a genie. Even if you give perfectly clear instructions, the result is often not exactly what you wanted. That is why I use Copilot and Gemini Code Assist in VS Code as assistive tools. I guide them step by step, and when they go off track, I can nudge them back in the right direction.

      To me, that highlights the gap between management’s expectations and the reality of how these tools actually work in practice.

  • _Algernon_ 8 hours ago

    Sunk cost fallacy. If they admit that people don't want it (at least not enough to cover its costs), shareholders will question all the investments into Copilot. So instead they push it down our throats.

  • gtsop 5 hours ago

    > if the tool was that good people would be beating down their doors to get it.

    Microsoft is a software company. If the tool were that good, they wouldn't have released it at all; they'd have kept it to themselves as an extreme competitive edge!

  • kogasa240p 4 hours ago

    Because the current LLM hype bubble is due to Silicon Valley wanting another SaaS service to keep the VC money flowing.

  • dijit 9 hours ago

    I agree, if you’ll allow me to diatribe my thoughts about why this could be (without thinking that these are my actual firm opinions);

    Since right now there is an air of competition, I would guess that these companies believe it's winner-take-all, and are doing their “one monopoly to aid another” to get this market before there's another verb-leader (like ChatGPT for LLMs, or Google for search).

    It could also be that they think that people won’t know how good they are until they try it, that it has to be seen to be believed. So getting people to touch it is important to them.

    But, I think I agree with you, its so heavy handed that it makes me want to abandon the tools that force it on me.

  • chickenpotpie 8 hours ago

    When the shopping cart was first introduced to grocery stores, nobody wanted to use it. People preferred to continue lugging around heavy baskets rather than push a cart. Actors had to be hired to walk around the stores pushing them around to convince people it was normal and valuable to use them.

    Sometimes people are resistant to use things that improve their life and have to be convinced to work in their own self interest.

    https://www.cnn.com/2022/05/14/business/grocery-shopping-car...

    • goku12 7 hours ago

      There are places around the world where shopping carts were introduced successfully without the accompanying actors to convince the customers to use them. The actual criterion must be whether the new addition boosts or hampers the customers' productivity, at least in the long run.

      When I first heard about git, I knew that it would be very useful in the future, even if I had to spend some time and effort in mastering it. Same with CI, project planners, release engineering, etc. Nobody had to convince me to use them. But AI just doesn't belong to that category, at least in my experience. It misses results that a simple web/site search reveals. And it makes mistakes or outright hallucinates in ways even junior developers don't. It's in an uncanny valley between the classic non-AI services and plain old manual effort with disadvantages of both and advantages of neither. Again, others may not agree with this experience. But it's definitely not unique to me. The net gain/loss that AI brings to this field is not clear. At least not yet.

Y_Y 11 hours ago

How on earth was Microsoft allowed to buy such a critical piece of tech infrastructure?

  • politelemon 10 hours ago

    It wasn't critical at that time.

    But then who made it critical over the intervening years? That's on us.

    It's easy to knee jerk on HN but let's try to do better than this.

    • gchamonlive 10 hours ago

      When Microsoft bought GH it was already the most popular forge by far, which is why it was bought in the first place.

      > But then who made it critical over the intervening years? That's on us.

      That's blaming the victim. The vast majority of open-source projects were hosted on GH since before Microsoft's acquisition. I remember back in 2018 when my team made the decision to move from Bitbucket to GitHub; the main consideration was platform quality, but also the community we were getting access to.

  • layer8 10 hours ago

    GitHub isn’t critical infrastructure; its only real USP is network effects.

    • transcriptase 10 hours ago

      If outages make headlines and stop whole companies in their tracks worldwide, that’s critical infrastructure, not just network effects.

      • netsharc 9 hours ago

        Gotta love the genius of creating a single point of failure out of a distributed (version control) system...

      • _Algernon_ 8 hours ago

        Git is designed so that you always have the full code you're working on copied to your local machine. GitHub being down for a short time now and then should be only a minor inconvenience.
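        The point can be demonstrated end to end; a throwaway sketch (all paths are temporary stand-ins, with a local bare repo playing the part of GitHub):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)

# A local bare repo stands in for the hosted remote.
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
echo v1 > file.txt
git add file.txt
git -c user.email=a@b -c user.name=demo commit -qm "v1"
git push -q origin HEAD

# Simulate the outage: the remote disappears entirely.
rm -rf "$tmp/origin.git"

# Local history, diffs, and new commits all still work.
echo v2 > file.txt
git -c user.email=a@b -c user.name=demo commit -qam "v2"
git log --oneline
```

        Only the operations that talk to the remote (push/fetch, PRs, Actions) stop working; the history and day-to-day tooling are all local.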

        • wiether 6 hours ago

          Sure, but GitHub is much more than a git repository. Otherwise companies wouldn't pay for it.

          As the centralized git repo, it allows devs to collaborate, by exchanging code/features, tracking issues and doing code reviews. It also provides dependencies management ("Package") and code building/shipping (GH Actions).

          Sure, if you usually spend one day or more writing code locally, you're fine. But if you work on multiple features a day, an outage, even of 30 minutes, can have a big impact on a company because of the multiplier effect on all the people affected.

      • ReptileMan 8 hours ago

        >and stop whole companies in their tracks worldwide

        This is a sign that their CTOs should be replaced. Not that github is critical.

      • Disposal8433 9 hours ago

        > If outages [...] stop whole companies in their tracks

        They should fucking learn how to code because no one in their right mind would depend on such an external service that can be easily replaced by cloning repos locally or using proxies like Artifactory. Even worse when you know that Microsoft is behind it.

        Yes, most companies don't have good practices and suck at maintaining a basic infrastructure, but it doesn't mean GitHub is the center of the internet. It's only a stupid git server with PRs.

        • yunwal 9 hours ago

          > It's only a stupid git server with PRs.

          I feel like you’re missing a few features here

          • Disposal8433 4 hours ago

            Which ones and what are those exclusive features that GitLab doesn't have?

  • airstrike 11 hours ago

    There is no law against that, so I'm not sure what you're suggesting.

    And git lives on regardless of GitHub

    • latexr 10 hours ago

      > There is no law against that

      Regulators can (and do) stop purchases which can be considered harmful to consumers. Just look at the Adobe/Figma deal.

      • bapak 10 hours ago

        If GitHub were to close tomorrow, you'd lose out on the social part temporarily, but there are effectively dozens of providers and solutions that could replace it.

        The same could not be said for Figma, where if lost, you'd end up looking at the company that tried to buy it. That's what those laws are for.

      • airstrike 10 hours ago

        No, Adobe/Figma was stopped because it would severely reduce competition in a market where there are already very few relevant players. That's all they can block.

  • andrewinardeer 11 hours ago

    Was GitHub really critical at time of purchase? Or has Microsoft turned it into critical infrastructure?

    • daemin 11 hours ago

      Even though Git is decentralised, people like having a simple client-server model for version control. So with GitHub being the best-funded free Git hosting service, it grew into the biggest. They also built out the extra services on top of git hosting: the issue tracker, CI/CD, discussion board, integrated wiki, GitHub Pages, etc.

      I would say all of those things were present before the acquisition, enough that Microsoft itself started to use the site for its own open source code hosting.

    • rs186 10 hours ago

      If you travel back to 2018 and ask random software engineers "are git and github developed and owned by the same company", a fair number of them would say yes, just like today.

    • diggan 10 hours ago

      > Was GitHub really critical at time of purchase?

      Do you think they would have bought it otherwise? Same for NPM, they got bought for huge sums of money because they were "critical" already.

    • oytis 11 hours ago

      It was the leading git storage at the time of acquisition, for many people synonymous with git itself

  • Spooky23 11 hours ago

    How on earth did anyone believe Microsoft was different this time?

    • diggan 10 hours ago

      They used Emojis and printed "Microsoft <3 Open Source" on posters for conferences, so clearly they really had changed...

    • reaperducer 10 hours ago

      How on earth did anyone believe Microsoft was different this time?

      There's a whole generation on HN who came up after Microsoft's worst phase, and have spent the last five years defending MS on this very forum.

      They're convinced that any bad thing Microsoft does is a "boomer" grudge, and will defend MS to the end.

      I hope I'm never so weak-minded that I tie my identity and allegiance to a trillion-dollar company. Or any company that I haven't founded, for that matter.

      • Spooky23 9 hours ago

        End of the day, PR works. Even in peak “friendly” Microsoft, they were hard nosed and noxious to negotiate with.

  • 9cb14c1ec0 8 hours ago

    Microsoft either owns or hosts on Azure a lot of critical pieces of tech infrastructure apart from just Github.

  • mvdtnz 10 hours ago

    "Critical"? "Infrastructure"? What do you think Github is?

    • chamomeal 9 hours ago

      Critical piece of tech infrastructure. Which it absolutely is.

      When GitHub goes down, the company I work at is pretty much kneecapped for the duration of the outage. If you’re in the middle of a PR, waiting for GitHub actions, doing work in a codespace, or just need to pull/fetch/push changes before you can work, you’re just stuck!

      • 1over137 9 hours ago

        Wow. Why would your company do that? It's easy to self-host gitlab for example.

        • wiether 6 hours ago

          It's probably easy to self-host Gitlab for a small team working on a limited number of projects.

          It's definitely not easy to self-host GitLab for hundreds of devs working on hundreds of projects. Especially if you use it as your CI/CD pipeline, because now you also have to manage your workers.

          Why do companies choose to pay GitHub instead of self-hosting their own GitLab instance? For the same reason they pay Microsoft for their email instead of self-hosting it.

    • zbentley 10 hours ago

      Among other things, a CDN. If it were to take a sustained outage, lots of important online systems would stop working shortly thereafter. And I’m not talking about developer tools; bigger sites/apps than you think are reliant on GH being up. Stupid to do that, sure, but widespread.

  • dboreham 9 hours ago

    Who would disallow them from doing so?

djoldman 10 hours ago

> The second most popular discussion – where popularity is measured in upvotes – is a bug report that seeks a fix for the inability of users to disable Copilot code reviews.

From the discussion:

> Allow us to block Copilot-generated issues (and PRs) from our own repositories

> ... This says to me that github will soon start allowing github users to submit issues which they did not write themselves and were machine-generated. I would consider these issues/PRs to be both a waste of my time and a violation of my projects' code of conduct¹.

> Note: Because it appears that both issues and PRs written this way are posted by the "copilot" bot, a straightforward way to implement this would be if users could simply block the "copilot" bot. In my testing, it appears that you have special-cased "copilot" so that it is exempt from the block feature.

How does one see that a user, e.g. "chickenpants" submitted an issue or PR that was generated by "Copilot"? Isn't there only one creator?

crote 9 hours ago

I'm getting quite sick of how it is forced on you. It's not just yet-another useful feature, they are shoveling it into everything, and giving it the most prominent place possible.

I don't want AI getting in the way on Github. I don't want an unremovable AI button in my Office 365 mail client. I don't want to get nagging AI popups every. single. time. I open the GCP console.

A year or two ago I was ambivalent about AI, and willing to give it a try. These days? I actively hate it. Like all nagging ads: if you have to force it on me this badly, how can it be anything but complete garbage?

  • bflesch 8 hours ago

    It's mostly about making it easy for users to opt in so they can steal the data. The theoretical AI benefits only appear once the data is stolen (pinky promise).

  • anonymars 7 hours ago

    "Press alt-i to draft a message"

    Fuck off and leave me alone you distracting piece of shit

scuff3d 4 hours ago

I'm all for getting away from GitHub, but Codeberg is a terrible name. Sounds like it's owned by Scrooge McDuck.

WhyNotHugo 11 hours ago

> During Microsoft's July 30, 2025 earnings call, CEO Satya Nadella said GitHub Copilot continued to exhibit strong momentum and had reached 20 million users.

Considering that they force it upon users and users cannot disable it, this sounds like a worthless metric.

I get an email every month telling me that my Copilot access has been renewed for another month. I'm probably being counted amongst those 20M users.

I could stand at the train station and yell "Cthulhu is our saviour" all day and later claim that the word of Cthulhu reached thousands of people today.

  • bflesch 8 hours ago

    I agree. For years we had to read HN posts about why we should not use vanity metrics, but now that the corporate tech bros are making money hand over fist with AI, user count is the most important metric again.

  • zahlman 4 hours ago

    > I get an email every month telling me that my Copilot access has been renewed for another month.

    I don't; any ideas what's different?

  • dmd 11 hours ago

    GLENDOWER: I can call spirits from the vasty deep.

    HOTSPUR: Why, so can I, or so can any man; But will they come when you do call for them?

    • hodgesrm 10 hours ago

      Ah, one of my all-time favorite Shakespeare quotes. Another is:

      "How sharper than a serpent's tooth it is to have a thankless child!“

      My father used it frequently when we were kids. I found out decades later it was a quote from King Lear.

  • stefan_ 10 hours ago

    I don't think they tell the guy it's forced. They are at Soviet-army levels of reporting; that's why you got people adding telemetry and Copilot to Editor.

ants_everywhere 11 hours ago

People have been voluntarily letting Microsoft host their code for years now.

And before that they posted their open source code to a centralized site that wasn't open source.

This is one of those things where of course it was going to happen. GitHub was VC funded, they were going to either exit to a big company or try to become one.

Eventually the bill was going to come due and everyone knew this. You can choose to rely on VC subsidized services but the risk is you are still dependent on them when they switch things up.

  • acomjean 10 hours ago

    If I remember correctly, GitHub (before MS) was initially free for open source and paid for everyone else. It wasn’t an entirely new idea (SourceForge?) but it used git, which was rising in popularity.

    I think GitHub added the “pull request” as a really useful add on to git and that really made it take off.

    Oddly, I used self-hosted git at an academic institution. I liked it because it was set up to use “hooks” https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks after check-ins. This became much harder when we were pushed off to a commercial host (GitLab, a GitHub competitor).
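    Those hooks are just executable scripts dropped into a repo's `hooks/` directory. A self-contained sketch of the classic post-receive deploy pattern (temp directories stand in for a real server; the branch name and paths are illustrative):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)

# Bare "server" repo plus a directory the hook will deploy into.
git init -q --bare "$tmp/hub.git"
mkdir "$tmp/site"

# The hook runs after every push; here it checks out main into the site dir.
printf '#!/bin/sh\nGIT_WORK_TREE="%s" git checkout -f main\n' "$tmp/site" \
  > "$tmp/hub.git/hooks/post-receive"
chmod +x "$tmp/hub.git/hooks/post-receive"

# A client push triggers the hook server-side.
git clone -q "$tmp/hub.git" "$tmp/client" 2>/dev/null
cd "$tmp/client"
git checkout -q -B main
echo "hello" > index.html
git add index.html
git -c user.email=a@b -c user.name=demo commit -qm "first page"
git push -q origin main 2>/dev/null
cat "$tmp/site/index.html"   # the pushed file, deployed by the hook
```

    The same mechanism drives notification mails, CI triggers, and policy checks; GitHub replaced this with webhooks, which is part of what the commenter lost in the move.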

    • goku12 7 hours ago

      > I think GitHub added the “pull request” as a really useful add on to git and that really made it take off.

      For the sake of correctness, the concept of pull requests was not introduced by Github. It already existed in git in the form of the 'request-pull' subcommand. The fundamental workflow is the same. You send the project maintainer a message requesting a pull of your changes from your own online clone repo. The difference is that the message was in the form of an email. Code reviews could be conducted using mails/mailing lists too.

      This is not the same as sending patches by email. But considering how people hate emails, I can see why it didn't catch on. However, Torvalds considered this implementation to be superior to Github's and once complained about the latter on Github itself [1].

      [1] https://github.com/torvalds/linux/pull/17#issuecomment-56546...

      • ycombinatrix 4 hours ago

        I still don't get the line wrapping hangup. Doesn't every modern text editor have an option to auto wrap existing text? Why should I manually limit text to an arbitrary 72 character width between newlines?

      • high_priest 6 hours ago

        I am stunned by the fact that a 13+ year old comment can be dug out, just like this, and presented as a valid argument in a conversation.

        How some people, like you sir, are able to recall such minute events is amazing.

        • goku12 5 hours ago

          > How some people, like you sir, are able to recall such minute events, is amazing.

          Oh! That's easy. I forgot that it is 13+ years old! XD

          Added later: Your comment made me look up more details about it. It was a widely discussed comment at the time. The HN discussion about it is as interesting as the comment itself [1].

          [1] https://news.ycombinator.com/item?id=3960876

    • diggan 9 hours ago

      > I think GitHub added the “pull request” as a really useful add on to git and that really made it take off.

      Personally, I remember the initial selling point of GitHub being that it was more "social" than any other forges at the time, since we were all wrapped up in the Web 2.0 hype and what not. I think they pushed that on their landing page back in the day too.

      It was basically Twitter but redone specifically for developers, with a focus on code rather than random thoughts.

      • chamomeal 9 hours ago

        Honestly that sounds kinda neat and I guess you can see traces of that idea today: I have a million unread GitHub notifications about things I don’t care about

    • CodesInChaos 8 hours ago

      > If I remember initially GitHub (before MS) was free for open source and pay for everyone else

      When I started using it, public repositories were free, and private repositories needed a paid account.

      The ToS did not require public repos to be open source, only permission for basic operations like fork (the button which clones, not creating derivative works) and download was required.

    • hannob 7 hours ago

      > I think GitHub added the “pull request” as a really useful add on to git and that really made it take off.

      I'm pretty sure the term "pull request" existed before GitHub. (Meaning writing an email saying "I have changes in my copy of the repo that I want you to merge into the main repo".) But GitHub put a UI around it, and they may have been the first to do that.

      • globular-toast 7 hours ago

        Can confirm. Pull request is something Linus talks about in the early days of git before he even acknowledged the existence of GitHub.

    • globular-toast 7 hours ago

      > I think GitHub added the “pull request” as a really useful add on to git and that really made it take off.

      Negative. The only thing GitHub added to the parlance is "forks" which are essentially like namespaced branches in the same repo.

  • sys_64738 9 hours ago

    Didn't SourceForge use to be the friendliest open source place, until they did something (which I forget) that got them binned? I think the problem for GitHub is the low barrier to entry for another open source hosting entity, if the typical MICROS~1 move of Embrace, Extend, Extinguish were practiced on GH.

    • apelapan 5 hours ago

      SourceForge started injecting adware into binaries distributed via their platform. They had been on a downward trajectory for a while, but after they started doing that they fell off a cliff.

      I worked for a company that used the on-prem version of their forge back in the 00s, and I remember liking it a lot. It felt novel, cool, and useful to have fully interlinked bug tracking, version control, documentation, project management, and release management.

    • DeepYogurt 8 hours ago

      Yep. Sourceforge started injecting ads and malware into (at least) installers that devs made available on the site.

    • hannob 7 hours ago

      They started shipping installers with de-facto-malware, but at a time when they were already on a downward slope. It was many years after "sourceforge was the default place to host FOSS".

  • kelvinjps10 10 hours ago

    Wasn't GitHub initially bootstrapped?

    • diggan 10 hours ago

      Yes, there were a couple of years when we all believed GitHub would eventually turn into an open platform made by and for FOSS, but then they took on VC investment and the dream went into hiding again.

      • jraph 9 hours ago

        > we all believed GitHub would eventually turn into a open platform made by and for FOSS

        I really don't remember it like this at all. I do remember looking for actually open source forges and choosing Gitorious, which was then bought and shut down by GitLab (and projects were offered seamless migration, which worked well, and somehow we ended up being hosted on an open-core platform, but that's another story).

        GitHub always looked like the closed platform the whole open source world somehow elected to trust for their code hosting despite being proprietary, and then there was this FOMO where if you weren't on GitHub, your open source software would not find contributors, which still seems to be going strong btw.

        I understand there was hope that GitHub would be open sourced, but I don't think there was any reason to believe it would happen.

        • diggan 9 hours ago

          > I understand their was hope that GitHub would be open sourced, but I don't think there was any reason to believe it would happen.

          Yeah, I don't think I myself had good reasons beyond "They seem like the good guys who won't sell out", but I was also way younger and more naive at that point (it was like 15 years ago, after all).

          I think I mostly just drank the Kool-Aid of what you mentioned as "if you weren't on GitHub, your open source software would not find contributors". There was a lot of "We love Open Source and Open Source loves us" from GitHub, which I guess was confusing to a young, formative mind who wanted GitHub to become like the projects it hosted. This hope was especially fueled when they started open sourcing parts of GitHub, like the Gollum stuff for rendering the wikis.

          • jraph 9 hours ago

            Fair enough, obviously!

            I suspect many people were in a similar situation.

    • everdrive 9 hours ago

      I'm not being flippant or contrary. What does "bootstrapped" mean in this context? I feel like I've never really understood it as a metaphor.

      • MangoToupe 8 hours ago

        It refers to pulling yourself up by your own bootstraps.

      • immibis 9 hours ago

        It means they paid for their expenses with their revenue, as opposed to venture capitalist investments.

  • gchamonlive 10 hours ago

    Even though you are right, that misses the point terribly.

    It's like using Instagram or Facebook. It's not at all a matter of individual choice when all your friends are on one single platform.

    Sure you can host your code anywhere, but by not using GitHub you are potentially missing out on a very vibrant community.

    Microsoft is entirely to blame. It bought the medium and took an entire community hostage in the process, just for the sake of profit.

    • layer8 10 hours ago

      While Microsoft is certainly to blame, GP is also right that the problem wouldn’t exist if people hadn’t continued en masse to have their code hosted on a centralized proprietary and (since 2012) VC-funded platform in the first place.

      As an aside, I don’t really see GitHub as a whole as a community. It’s a go-to place with network effects, but network effects don’t by themselves imply “community”.

      • stavros 9 hours ago

        People will respond to incentives. They had an incentive to host their code in a place that easily let them do things that were extremely high friction before.

        People aren't morally reprehensible because they prefer convenience over hardship. People like using easy things, and they like making money. This means that people will make easy things so other people will give them money. If you don't like it, make easy things that work the way you like them, run them ethically, and don't sell them to anyone.

        • ants_everywhere 6 hours ago

          > People will respond to incentives....People aren't morally reprehensible because they prefer convenience over hardship

          To clarify my point isn't that anyone is morally reprehensible. My point is that using a free VC-backed service is like selling an implied option. You don't know when they're going to invoke the option, but eventually they will. And often it will be when you've gotten used to the income from selling the option.

          It's not a question of morality or judgment, it's just meant to be a description of what the game we're playing is.

          > If you don't like it, make easy things that work the way you like them, run them ethically, and don't sell them to anyone.

          I'm trying to

          • stavros 6 hours ago

            Sure, but nobody knows this. Everyone just thinks that things will be as they are now. I don't think the average developer knows what enshittification is, but Doctorow really nailed that one.

      • gchamonlive 10 hours ago

        Yeah, I said that first thing. It's right but it misses the point.

        Being VC backed isn't a deciding factor for adopting a forge. It's the community that drives adoption.

        > I don’t really see GitHub as a whole as a community.

        It's basically a social network on top of a source code forge. You have a profile that is individually identifiable, and you can open issues and contribute to discussions on pull requests. All of this can be traced back to individuals as they collaborate, make connections, and contribute to each other's work. How is this not a community?

        • layer8 10 hours ago

          > Being VC backed isn't a deciding factor for adopting a forge. It's the community that drives adoption.

          OP is arguing that VC should be a deciding factor. The “community” wouldn’t exist if people had made that a deciding factor.

          A social network is not a community. It may contain many communities. GitHub has communities around projects. But GitHub as a whole isn’t a community.

          • gchamonlive 9 hours ago

            Ah yes, we agree. GH itself enables many communities to emerge but it itself isn't a community.

    • Mistletoe 9 hours ago

      > It bought the medium and took an entire community hostage in the process just for the sake of profit.

      Counterpoint is that is what companies are supposed to do. They are made to make money, the end. The only hope against this for humans is regulation, and that has fallen off the face of the earth. It’s like humans are doomed to repeat the late 19th and early 20th century era over and over.

      • generic92034 6 hours ago

        "Yes, the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders!"

FredPret 8 hours ago

I’m a soon-to-be-ex-VSCode user, but seeing the long march of Copilot (Clippy 2.0, except it steals my code) at Github and now VSCode, I’m taking the plunge and learning Emacs.

kogasa240p 4 hours ago

Move to alternatives then, it won't be easy but it'll be worth it.

datavirtue an hour ago

These anti-agentic coding nut jobs are going to get themselves thrown out of the industry. STFU, some of us are working.

api 9 hours ago

What is the actual rationale behind some companies literally shoving AI down people’s throats?

It’s fascinating stuff and can be very useful. Why does it have to be rammed so hard? I’ve never quite seen anything like this.

Or maybe I have. It reminds me a little of the obviously astroturfed effort to ram crypto down people’s throats. But crypto was something most people didn’t have any actual utility for. A magic tireless junior intern who had memorized the entire Internet is actually useful.

  • marginalia_nu 9 hours ago

    KPIs are likely the missing part of the puzzle. CEO wants AI engagement to go up, organization makes AI engagement go up.

    If users don't want to engage with new AI features, the new AI features become unavoidable so that engagement goes up despite user preferences.

    KPIs are a fantastic way for an organization to lose any touch with reality and can drive some truly bizarre decision-making.

    • api 9 hours ago

      Ahh, the reason we lost the Vietnam war.

      https://en.m.wikipedia.org/wiki/McNamara_fallacy

      • marginalia_nu 9 hours ago

        "AI is the Vietnam of product management" is a blog post that almost writes itself. Not really my field so if anyone wants to take it for a spin, go ham.

        • api 9 hours ago

          Have ChatGPT write it

          This isn’t AI specific though. The whole industry runs this way because thoughtful decision making doesn’t scale easily. KPIs are easier.

          • marginalia_nu 7 hours ago

            Kinda weird though: like with many things in the industry, we seem to be doing things that are far from optimal, but for some reason organizations that do things differently aren't winning the competition.

            Ostensibly most successful software is written in languages that aren't very good, with development methodologies that aren't very good, in organizational structures that aren't very good. Where is the existence proof? Why isn't software written in a good language, using a good methodology with sane management winning the race?

            • grayhatter 7 hours ago

              Because quality has never been the driver of survival. It has always been survival itself. The race is a race of endurance, not a competition on merits. Humans would undoubtedly be better without scars, but the same anomaly that gave us scar tissue gave us faster wound healing, increasing survival. But scars don't go away, even when it'd be better if they did; there's no selection pressure there because it's good enough. IBM still exists despite its inability to make good decisions; it makes decisions that allow it to survive. It could just as easily make different decisions, and I would have named a different company. Not because it made different decisions, but because it survived. Why do companies that survive make these weird decisions? Because there is no back pressure. Decisions don't matter, survival does.

              The race isn't a competition, it's a death march. If you want to 'win' the death march, prioritize survival above everything else, especially quality and correctness.

              (I don't strictly follow this philosophy myself, a good engineer will always ask, why not both. Just make sure you identify endurance as the most important strategy)

              • marginalia_nu 6 hours ago

                This seems like an orthogonal concern. I don't see why quality and correctness would anti-correlate with survivability. I'll ask the same question again: out of all the highly survivable businesses, why are so many seemingly dysfunctional?

                • grayhatter 6 hours ago

                  That orthogonality is my exact point. I believe you're correct: quality and correctness aren't negative pressures on survival. If anything, they should support survival, and I'd assume they should also exert a slight positive pressure on adoption/growth.

                  But I'd hope you'd admit quality and correctness aren't free attributes? They have a cost. I can churn out low-quality code way faster than I can produce code I'm proud of. If I attach myself to the quality of the code, get stumped by some bug, become frustrated, and take a break from project_a to work on something else, and while working on project_b to "clear my mind" I fall in love with project_b, or it gets more popular, or whatever that "pressure" happens to be... project_a has no remaining developers, and it is dead now. Thus, quality has had a negative impact on its survival.

                  Suitability has a tenuous connection to, and dependence on, quality and correctness (which I believe are synonyms for the same core idea?).

                  But why are so many businesses (the ones that still survive) so demoralizingly dysfunctional? Because they're run by individuals who don't value quality and correctness above [other attribute]. When given the choice to increase money (which is effectively the same thing as market share; and when talking about survival, popularity is the same thing as suitability) or to increase quality, they will always make the decision that ensures their survival (by chance, not by intent; that's the orthogonality). Eventually they'll turn that knob too far, degrade their quality enough, and create an ecological niche for someone else to take over. (A competitor that maybe they acquire before it poses a real risk to their survival/popularity, again choosing to make money/survive over a decision targeting quality.)

                  Would *you* rather make money, or write something high quality? I use and love marginalia, so I think I can guess the answer. (Thank you so much for building something that actually meaningfully improves the internet btw!) Are there decisions you could make that would trade the quality to become more popular, or make more money? Yes, I'm sure, but you don't seem to be trying to become the next google.

  • Traubenfuchs 9 hours ago

    Not old enough for MongoDB? Big Data?

    • api 9 hours ago

      I lived through that but it wasn’t like this. Hype isn’t the same thing as having something rammed down your throat with constant nag pop ups and dark patterns.

      • dboreham 9 hours ago

        It's more like the pop-under era.

knowitnone2 8 hours ago

Isn't this typical Microsoft behavior? Just look at Windows and how many components are forced upon you. Try uninstalling them; they just come right back. Most of them you can't even uninstall; some you can't even unpin. That's a monopoly for you. And they steal your documents by making OneDrive the default so they can train their AI. It's malicious.

zzzeek 9 hours ago

Can someone explain to me if this is real? I run many high-profile OSS projects that are all hosted on GitHub. I've yet to see any issues or PRs generated by AI; when PRs come in, I've never seen an AI code review pop in. I've seen maybe one or two people trying to answer discussion questions where they obviously used an LLM, but that wasn't Copilot, it was just individual people trying to be clever. Why am I not seeing this happen on my repos?

It's just the Copilot popups that are hardcoded into VS Code right now, despite no extension being installed, that are very annoying, and I'd like those to go away.

  • Leynos 6 hours ago

    I suspect it's either fantasy or fabrication.

throwaway94876 9 hours ago

Just put anybody who PRs AI slop to any repo on a big, collaborative blocklist so we can all block them and move on with our lives. They would be PRing AI slop with or without Copilot integrations anyway.
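A minimal sketch of what consuming such a shared blocklist could look like. Everything here is hypothetical (the file format, function names, and the blocklist itself); nothing like this is a built-in GitHub feature:

```python
def load_blocklist(lines):
    """Parse a shared blocklist: one GitHub username per line,
    '#' starts a comment, blank lines are ignored.
    (Hypothetical format, not an existing standard.)"""
    users = set()
    for line in lines:
        name = line.split("#", 1)[0].strip().lower()
        if name:
            users.add(name)
    return users


def should_auto_close(pr_author, blocklist):
    """Return True if the PR author appears on the community
    blocklist (case-insensitive match on the username)."""
    return pr_author.lower() in blocklist
```

A maintainer's CI job could fetch the community-maintained list, run incoming PR authors through `should_auto_close`, and close matches automatically.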

rdm_blackhole 8 hours ago

I am not sure if it's just me but the Github UI has become incredibly slow.

On bigger PRs, I regularly have diffs that take seconds to load. The actions have also started hanging a lot more often and will run for 30 minutes stuck in some kind of loop unless they time out or I cancel them manually. This did not use to happen before, or at least not as frequently as it does now.

Finally, when I try to cancel the hung actions, the cancel button never gets disabled after I click it, and it is possible to click it multiple times without any effect. Once clicked, surely it shouldn't be possible to click it again unless the API call failed.

Clearly there is a quality decrease happening here.

IshKebab 11 hours ago

Is this actually a real problem? Note it says "forced Copilot features" and

> the most popular community discussion in the past 12 months has been a request for a way to block Copilot, the company's AI service, from generating issues and pull requests in code repositories.

but Microsoft doesn't automatically make these issues and PRs. Users have to trigger it.

I mean, I do think you should be able to block the `copilot` user, but I looked at this user's repos and their most popular one has a total of 3 PRs, with no Copilot ones.

I also checked the Rust compiler which is obviously waaaay more popular and it appears to have had zero copilot PRs.

  • madeofpalk 10 hours ago

    As a paid/commercial OSS maintainer, I haven't seen this from the public either. People occasionally submit low-effort PRs or issues, probably from Claude or ChatGPT or whatever, but I don't feel too bothered dealing with them. Of course, I'm fortunate enough to be paid for this.

    I think it's just an unfortunate fact now in 2025 that if you look after a text box online, you're going to have to deal with AI sludge in one way or another. If you don't want to do that, close the text box.

  • flykespice 10 hours ago

    > Is this actually a real problem?

    I mean, if Microsoft is "training" on your source code without consent (and potentially violating licenses), that is a huge problem.

    > I also checked the Rust compiler which is obviously waaaay more popular and it appears to have had zero copilot PRs

    How do you assess whether some PR was made by an AI (like the user did)?

    • IshKebab 9 hours ago

      > training

      Not what this is about.

      > How do you assess whether some PR was made by an AI (like the user did)?

      Searched for PRs authored by copilot or mentioning copilot.
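      That kind of search can be expressed with GitHub's standard search qualifiers (`repo:`, `is:pr`, `author:`). A tiny hypothetical helper that builds the query string; the `copilot` author login is an assumption about how the Copilot agent signs its PRs and may differ:

```python
def copilot_pr_query(repo: str) -> str:
    """Build a GitHub search query for Copilot-authored PRs.

    `repo:`, `is:pr`, and `author:` are standard GitHub search
    qualifiers; the `copilot` login is an assumed bot account name.
    """
    return f"repo:{repo} is:pr author:copilot"
```

Pasting the resulting string, e.g. `copilot_pr_query("rust-lang/rust")`, into GitHub's search box is one way to reproduce the check described above.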

      • flykespice 8 hours ago

        > Searched for PRs authored by copilot or mentioning copilot.

        So does GitHub Copilot force your PRs to be tagged as co-authored by Copilot, or can users be slick and avoid mentioning it?

user214412412 10 hours ago

Can't wait for the forced Teams integration.

trimethylpurine 8 hours ago

Is it on by default for end user repos? It appears to be disabled by default for organizations.

In the organization:

Organization -> Settings -> Copilot -> Access... Turn it off.

darepublic 5 hours ago

Seriously, just screw Microsoft. Screw their CEO. Blatant parasites.

RossBencina 11 hours ago

ctrl+shift+p > Chat: Hide AI Features

in vscode
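For making that stick in configuration rather than via the command palette, the equivalent knobs can go in the user `settings.json`. The setting names below are assumptions based on recent VS Code releases (they have changed between versions), so treat this as a sketch:

```jsonc
// User settings.json — setting names may vary by VS Code version.
{
  // Assumed to be what "Chat: Hide AI Features" toggles in newer builds.
  "chat.disableAIFeatures": true,

  // Hides the Copilot/chat button in the title bar's command center.
  "chat.commandCenter.enabled": false
}
```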

LtWorf 11 hours ago
  • benrutter 9 hours ago

    Me too! I absolutely love it as a project, but I do miss finding and following the development of projects I'm interested in there.

    Any tips for finding other interesting codeberg hosted projects?

  • chanux 6 hours ago

    I also brought a couple of projects there. However I'm not sure if AI crawlers are staying away from it.

    Also, I remember there was Radicle https://radicle.xyz

    Any Radicle users?

  • crabmusket 10 hours ago

    I just became a donor :)

    • LtWorf 10 hours ago

      Me too, but they host my stuff.

chris_wot 8 hours ago

This is very interesting... I've just learned about Codeberg. Does anyone have any info on it?

bgwalter 10 hours ago

Corporate overreach like this happens if most open source developers no longer speak up because they want to be hired or retain their positions. They delude themselves if they think that attitude provides them any security. The opposite is the case: corporations will use the sycophants, secretly laugh about them and fire them if they have served their purpose. As in the case of the Google and Microsoft firings of Python core developers.

agilob 7 hours ago

For me, Copilot keeps commenting something like "this changes a typo in the documentation". The comment then blocks automerge of the PR, so I have more work: I have to go to the PR and mark the comment as resolved. Thanks AI, thanks Microsoft, fantastic job burning electricity for this. At least Bitcoin created some value ;)

  • danny_codes 7 hours ago

    lol bitcoin has not created any value. Well it has if you are a scammer or drug dealer I suppose.

    • daveguy 7 hours ago

      Just wait until people realize how little value Bitcoin has when people rush to the exits. Limited number of transactions, limited utility for the average person. We haven't had runs on banks in a long time in the US because of regulation. Crypto is a wild west. There will eventually be a Bitcoin exit trigger event and it will be brutal.

monegator 7 hours ago

Hear me out for a second: what if... what if, on every single bot post, people just commented for the bot to go and suck a big fat... you know what, and then locked the conversation? Maybe the bot would learn something, eventually?

Sometimes I wonder why this hasn't happened anyway; the Internet's not what it used to be.

And maybe companies would eventually take a hint, after they've banned all actual contributors for having foul fingers (very unlikely, I know).

Thanks for attending my TED talk.

hparadiz 11 hours ago

I recently got access to the premium version through work and was able to prototype something super legit in a language I don't really code in. It requires a heavy understanding of Linux, and I had to rephrase my prompts in a "high level" way. But what would have taken weeks by hand, I was able to do in a few days.

That said, I would have a hard time justifying paying for it in my personal life, because it's really that expensive. I look forward to 10 years from now, when local ML is good enough or free.