The call for a 6 month moratorium on making AI progress beyond GPT-4 is a terrible idea. I'm seeing many new applications in education, healthcare, and food. Let's balance the huge value AI is creating vs. realistic risks. There is no realistic way to implement a moratorium and stop all teams from scaling up LLMs, unless governments step in. Having governments pause emerging technologies they don't understand is anti-competitive, sets a terrible precedent, and is awful innovation policy.

Responsible AI is important, and AI has risks. The popular press narrative that AI companies are running amok shipping unsafe code is not true. The vast majority (sadly, not all) of AI teams take responsible AI and safety seriously. A 6 month moratorium is not a practical proposal. Let's invest more in safety while we advance the technology, rather than stifle progress. To advance AI safety, regulations around transparency and auditing would be more practical and make a bigger difference.

Successful experimentation is not about the outcome of that one experiment. Successful experimentation is about improving the process, structure, trustworthiness, democratization, and motivation for experimentation in organizations. This is why I'm running the Experimentation Culture Awards for the fourth year.

The goal is to help organizations trying to grow an evidence-based decision-making culture by pointing them in the right direction. We do this by sharing inspirational stories of experimentation growth and nominating individuals, teams, and organizations that deserve recognition. In an experimentation culture, you are free to try and fail or succeed, while the direction and the results of work are based on trustworthy, gathered evidence.

The 2023 case submission form is open until April 25th, 2pm UTC. We are not awarding your current level of experimentation. We are awarding the growth in experimentation culture, which could have been from nothing to something or from something to more. You can submit cases from your own team or organization, or from other teams or organizations (looking at you, agencies!). Feel free to tag anyone in the comments on this post who should be notified about the Experimentation Culture Awards 2023 case submission. We also have a community award for individuals, teams, or organizations adding value to the experimentation community on a broader scale.