tensor 4 days ago

It's unfortunate that these laws focus on AI. Why isn't it illegal to use traditional algorithms that have bias? Why aren't there regulations ensuring bias testing when humans themselves make decisions?

In all cases the bias originates from human behaviour. The one advantage of using AI is that the bias is now surfaced. But it has always been there, which is why the machine learning algorithms learn it. It's just that no one typically looked at the data in aggregate prior to AI.

In any case, these rules should not be scoped to AI in my opinion. They should include algorithms and also require bias testing for teams of humans making decisions. If anything, I'd trust an AI system that has been analyzed for bias over a human making a decision.

  • doe_eyes 4 days ago

    > In all cases the bias originates from human behaviour. The one advantage of using AI is that the bias is now surfaced.

    I think that's an odd way to look at it. The bias originates from the training data, which was what - idle conversations on Reddit and other similar content pilfered on the internet? The bias is doing some harm in places like that, but we're now using that to make real decisions. That's... worse.

    • walt_grata 4 days ago

      I think the point is that bias happens in all human behavior, but with AI it's more visible than in the general population. Which I'd say is a positive, but I don't believe it offsets the downsides.

      • doe_eyes 4 days ago

        But that's not a given. We conduct ourselves differently at work and in private life. Further, workplace processes are often designed to minimize bias. And here, we have a system that is trained on content that's two rungs above 4chan, and then coerced not to say racist things via RLHF. I don't think it's comparable at all.

    • tensor 4 days ago

      No, typically these sorts of models would not be trained on Reddit; they would be trained on the historical data that insurance companies and lending companies have. For example, a famous case is where they used AI to help decide if people will reoffend [1]. The bias here was already embedded in the prior historical data. E.g. there is a level of systemic racism or bias built into the entire prison system.

      While HN is obsessed with LLMs, AI regulation targets more than just those systems. And yes, AI and machine learning were pervasive and used in a variety of useful settings long before these toy chat systems came about.

      Insurance and lending companies already have complex algorithms and formulas they use to determine risk scores. Adding components to these systems that are learned from data shouldn't change the overall oversight mechanisms required of them, in my opinion.

      [1] https://www.technologyreview.com/2019/01/21/137783/algorithm...

      • doe_eyes 4 days ago

        People are obsessed with LLMs because they can be applied to unstructured data and produce human-like outputs, so they are applicable to vastly more processes with vastly less oversight.

        The kinds of "AI" you're describing have been with us for decades, but LLMs and their ilk are a more consequential development. And if we're passing laws today, that's what they will be regulating for the most part.

        • tensor 3 days ago

          Which is what I'm saying is wrong. If the oversight on insurance and other companies is lacking, it needs to be improved across the board, not just with lip service to the popular technology of the day. The bias inherent in the existing risk scoring systems is absolutely as consequential as LLMs, if not more so.

          Most of these regulations could replace "AI" with "technology, mathematical model, or human processes" and they would be 1000x better laws.

  • carbocation 4 days ago

    The behavior should be the start and the end of the regulation. No need to specify AI, algorithms, etc., at all.

  • ethbr1 4 days ago

    > Why isn't it illegal to use traditional algorithms that have bias? Why aren't there regulations ensuring bias testing when humans themselves make decisions?

    It is and there are, but rule-based systems and humans have traditionally been more limited.

    You'd better bet that health insurers continually get audited in terms of claim pay rates and correctness.

    The major difference is that AI systems can be pre-audited at minimal cost. In contrast, human auditing traditionally consumes billable processing time and so is done post-hoc on production results.

    Honestly, I'm excited about these types of regulations. I think systems with more transparency, testability, and accountability are going to be better for society as a whole.

    If it took "cost savings" + "scary AI" to get us there (digitization/automation and proactive outcome auditing, respectively), it is what it is.

yuliyp 4 days ago

So this actually sounds like a realistic approach to managing the dangers of AI. It ensures that people have some sort of recourse against algorithms deciding to make their life miserable. This feels like an extension of similar approaches used in credit: credit rating agencies have to let you look at the data used, and are required to have flows for people to challenge data that may be harming them.

Certainly it's a very different approach from people trying to mandate that AIs must be designed in ways so that they can't be used for bad stuff (which to me feels like a fundamentally broken approach).

flaque 4 days ago

> Whether (people) get insurance, or what the rate for their insurance is, or legal decisions or employment decisions, whether you get fired or hired, could be up to an AI algorithm

This is a bit like trying to regulate horseshoes while everyone else is talking about speed limits & seat belts. Both parties say the words "carriage" and "passenger", but they have completely different ideas in their heads about what is about to happen.

jqpabc123 4 days ago

People make bad decisions too --- but they rarely do so alone. Try hiring someone in an established company without multiple levels of review.

The problem with AI is that we know these models are flawed but they are being implemented anyway in an effort to save money.

If you have to manually review all AI results, the cost savings start to evaporate. Particularly if it leads to lawsuits.

Imagine trying to explain in court how/why AI decided to fire someone.

The real culprit here is greed.

ForHackernews 4 days ago

Same thing that happened with the Colorado law that required employers to include salaries in job ads: employers will just exclude Colorado residents.

  • soared 4 days ago

    I’ve never seen a job listing excluding Colorado residents, and I live in Denver and search for jobs.

    Employers just ignore it for remote work, or put a really big range. Big ranges are still useful, since they're always enough to make a decision on whether to apply or not.

    • Jtsummers 4 days ago

      When it first went into effect, there were remote job listings that suddenly added exclusions for Coloradans, and listings that were removed and re-added with the exclusion. My wife was in the middle of interviewing with Oracle when the law came into effect (she was on the last of several rounds of interviews); they dropped her and the listing, then reposted it the next week barring people from Colorado from applying.

      I haven't seen this in a while, though.

      • soared 21 hours ago

        Yeah, seems weird since Oracle has a campus in Broomfield and an office in DTC, but that company is a mess, so it doesn’t surprise me.

    • lelandfe 4 days ago

      I did see Colorado excluded, and even worked for a (Not Good) company that did it. I haven’t seen any examples this year, however, likely because California and New York followed suit.

  • dghlsakjg 4 days ago

    How are Colorado banks, landlords, and employers going to exclude their customers and employees?

    Right now the law just asks for disclosure and sets up a human appeal process for AI-based decisions.

    • jmclnx 4 days ago

      Yes, for local small business jobs you are correct. But these laws could cause large multinationals to leave Colorado.

      Plus, in reality, banks really do not "need" tellers since most people get direct deposit and use debit/credit cards and phones for transactions. So large banks could leave too. I remember reading somewhere that banks do not really like savings accounts since most people keep a small balance in them.

      With that said, I wish the laws Colorado has for AI and salaries could be nationwide.

      • dghlsakjg 3 days ago

        Do you think that large multinationals are really going to pull out of a market over having to update their disclosures in fine print?

        Reducing your applicant pool in a state you don’t operate in because you can’t be bothered to disclose salary is one thing. Closing down branches, losing customers and firing hundreds or thousands of employees in a move that reduces your bottom line is a whole other thing.

  • throwaway48476 4 days ago

    The ads I've seen now exclude multiple states. At some point it just gets silly.

geodel 4 days ago

For starters, catch the violators and fine them.