
The technology thread

AI just regurgitates stuff that it scraped (usually by stealing) off the internet and uses it to form new sentences. It has more access to our posts on here than it does to the stuff we submit for work. I'm sure some places will have an in-house AI that can replace low-level grunt workers (think libraries and academia) but that's about it.
 
AI just regurgitates stuff that it scraped (usually by stealing) off the internet and uses it to form new sentences. It has more access to our posts on here than it does to the stuff we submit for work. I'm sure some places will have an in-house AI that can replace low-level grunt workers (think libraries and academia) but that's about it.
How many people and jobs, though, actually just recycle things they've heard in the past? Yes, there is some creativity, but even many ideas considered original are just repackaged versions of something else at this point.
 
How many people and jobs, though, actually just recycle things they've heard in the past? Yes, there is some creativity, but even many ideas considered original are just repackaged versions of something else at this point.
That's an oversimplification of the problem. We all learn from what we've heard/done in the past. However, humans, unlike AI, can give weight based on cumulative knowledge and experience from far outside the problem, both when things go wrong and when they go right. They can also come up with solutions outside of the box they're given. There's a gap AI can't quite bridge, and it needs to be told by a human how to bridge that gap.

It's a weird thing, but I don't think people in general realise how much creative thought and judgement goes into their actions and decisions.
 
But a game is a very limited box of inputs and outputs, so you can create very defined limitations around what can or can't be done, and a very definitive outcome. Once you're outside that box, or require subjectivity, the entire thing falls apart.

Computing power and complexity have increased massively in the last 30 years. What can theoretically be done, and its limitations, haven't changed much since Turing set out what was required ~90 years ago.
I know, but it's still early days. Go back a few decades and computers couldn't beat humans at chess; then they rapidly developed to beat humans at chess, but it was a very limited environment, as you mentioned, and was done through brute force and programming in a load of tactics etc. Now we have computers that can learn how to play chess through nothing more than being told what the pieces can do and what the objective is, then playing against themselves many times.

The game in question was a racing game with custom-designed tracks. Yes, it is still very limited, but your average person at home can make an extremely basic AI that can learn to beat a human at something without being specifically programmed to do so. There are more professional AIs out there that can be trained to diagnose cancers better than doctors can. This is what it can already do.

In 30 years' time, I'd guess AI and robotics will be able to do quite a few jobs currently done by humans, largely autonomously. Humans likewise fail if the information they're given is poor or they're put outside their comfort zone. Also, the focus of your response seems to be things like chat AI or imaging AI, as opposed to other things going on. At anything that requires pattern recognition, an AI will likely be able to exceed humans, either now or in the near future.
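The "learns by playing itself" idea can be sketched in a few lines. This is a hypothetical toy: tabular Q-learning teaching itself the game of Nim (21 stones, take 1 to 3, whoever takes the last stone wins), which is nothing like a real chess engine, but the principle is the same one described above: the program is given only the rules and the result, no tactics.

```python
import random

# Self-play sketch: the program is told only the legal moves and who won,
# and learns move values by playing against itself (toy example: Nim).
Q = {}  # Q[(stones_left, stones_taken)] -> estimated value for the mover

def best_action(stones, eps=0.1):
    """Pick the highest-valued legal move, exploring at random eps of the time."""
    actions = [a for a in (1, 2, 3) if a <= stones]
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((stones, a), 0.0))

def self_play(episodes=50_000, alpha=0.1):
    for _ in range(episodes):
        stones, history = 21, []
        while stones > 0:                 # play one full game against itself
            a = best_action(stones)
            history.append((stones, a))
            stones -= a
        reward = 1.0                      # the player who moved last won
        for state, action in reversed(history):
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (reward - old)
            reward = -reward              # players alternate move by move

self_play()
print(best_action(2, eps=0))  # greedy play: take both stones and win
```

Very loosely, the same loop with a neural network in place of the lookup table is the shape of the modern self-play systems; the table is the part that stops scaling.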
 
That's an oversimplification of the problem. We all learn from what we've heard/done in the past. However, humans, unlike AI, can give weight based on cumulative knowledge and experience from far outside the problem, both when things go wrong and when they go right. They can also come up with solutions outside of the box they're given. There's a gap AI can't quite bridge, and it needs to be told by a human how to bridge that gap.

It's a weird thing, but I don't think people in general realise how much creative thought and judgement goes into their actions and decisions.
Yes and no. Think how many years of experience humans go through from being a baby in order to learn how to deal with complex situations. Part of this is personal experience, but also knowledge passed on. Give AI enough time and enough experiences and it will surely start to choose the right response to the right situation.
 
I know, but it's still early days. Go back a few decades and computers couldn't beat humans at chess; then they rapidly developed to beat humans at chess, but it was a very limited environment, as you mentioned, and was done through brute force and programming in a load of tactics etc. Now we have computers that can learn how to play chess through nothing more than being told what the pieces can do and what the objective is, then playing against themselves many times.

The game in question was a racing game with custom-designed tracks. Yes, it is still very limited, but your average person at home can make an extremely basic AI that can learn to beat a human at something without being specifically programmed to do so. There are more professional AIs out there that can be trained to diagnose cancers better than doctors can. This is what it can already do.

In 30 years' time, I'd guess AI and robotics will be able to do quite a few jobs currently done by humans, largely autonomously. Humans likewise fail if the information they're given is poor or they're put outside their comfort zone. Also, the focus of your response seems to be things like chat AI or imaging AI, as opposed to other things going on. At anything that requires pattern recognition, an AI will likely be able to exceed humans, either now or in the near future.
Pattern recognition in computers isn't a new thing at all, though; it's exceeded human speed for a significant amount of time. You use that kind of technology every day.

I'm limiting it to chat/imaging AI because that appears to be the focus of what people think can be done; it's the emerging tech that is the current buzz.

Take your chess example: that's not learning, it's replacing the grunt work of programming the tactics, but it's what I mean by a limited box with clear inputs and outputs. Running through the permutations and giving answers has been part of systems modelling for ages. All it's doing is using a data subset it's created from running those models and regurgitating the results.

You run a chess game, you log the moves, you log if you win. You repeat. Eventually you have logs of all the possible board states, the moves to win, and how often making the next move wins, giving an optimum strategy. It's not learning; it's building a large enough dataset within a limited box. You just need the computational power to generate the dataset and then search through it. It's a massive leap from there to, say, creating chess 2. Chess is also the perfect example of facsimile behaviour. It is such a limited game in that there is no randomness; it's an extremely constrained box with limitations. It's why it was used in the '90s to show the power of supercomputers.
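The log-and-count process described above fits in a short script. Here is a sketch using noughts and crosses as a stand-in, since chess's state space is exactly the computational-power problem being pointed out: play random games, log (board, move, player), count wins, then read the move with the best empirical win rate back out of the table.

```python
import random
from collections import defaultdict

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

stats = defaultdict(lambda: [0, 0])  # (state, move) -> [wins, times played]

def random_game():
    board, player, log = ["."] * 9, "X", []
    while winner(board) is None and "." in board:
        move = random.choice([i for i, s in enumerate(board) if s == "."])
        log.append(("".join(board), move, player))
        board[move] = player
        player = "O" if player == "X" else "X"
    return winner(board), log

# "You run a game, you log the moves, you log if you win. You repeat."
for _ in range(100_000):
    won_by, log = random_game()
    for state, move, player in log:
        stats[(state, move)][1] += 1
        if player == won_by:
            stats[(state, move)][0] += 1

def best_move(state):
    legal = [i for i, s in enumerate(state) if s == "."]
    return max(legal, key=lambda m: stats[(state, m)][0] / max(1, stats[(state, m)][1]))

print(best_move("........."))  # the table favours the centre square (index 4)
```

No strategy is programmed in, and nothing generalises either: throw the table away and it knows nothing, which is the "limited box" point.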
Yes and no. Think how many years of experience humans go through from being a baby in order to learn how to deal with complex situations. Part of this is personal experience, but also knowledge passed on. Give AI enough time and enough experiences then it will surely start to choose the right response to the right situation.
How? It's a technical limitation; nobody, and I mean nobody, has presented AI that is capable of genuine creative thought. AI in its current state is regurgitation of information with randomness added.
 
There are more professional AIs out there that can be trained to diagnose cancers better than doctors can. This is what it can already do.
I listened to a podcast about this a while back. I think they said the AI was 100% correct on diagnosis, but the sticking point was they didn't know what criteria it was using to make the diagnosis.
 
I listened to a podcast about this a while back. I think they said the AI was 100% correct on diagnosis, but the sticking point was they didn't know what criteria it was using to make the diagnosis.
What, tech bros overinflating what their tech can do... say it isn't so!

Smoke and mirrors over what you actually did.

"We can get cancer diagnosis 100% correct"
"Within a specific limited dataset we got cancer 100% correct; please don't read our actual study beyond the PowerPoint presentation, dear investor"
 
It was more the fact that you can't work out what it's using to make that diagnosis that stood out for me.

Makes it difficult to really test its effectiveness if you don't know how it's working it out.
 
Pattern recognition in computers isn't a new thing at all, though; it's exceeded human speed for a significant amount of time. You use that kind of technology every day.

I'm limiting it to chat/imaging AI because that appears to be the focus of what people think can be done; it's the emerging tech that is the current buzz.

Take your chess example: that's not learning, it's replacing the grunt work of programming the tactics, but it's what I mean by a limited box with clear inputs and outputs. Running through the permutations and giving answers has been part of systems modelling for ages. All it's doing is using a data subset it's created from running those models and regurgitating the results.

You run a chess game, you log the moves, you log if you win. You repeat. Eventually you have logs of all the possible board states, the moves to win, and how often making the next move wins, giving an optimum strategy. It's not learning; it's building a large enough dataset within a limited box. You just need the computational power to generate the dataset and then search through it. It's a massive leap from there to, say, creating chess 2. Chess is also the perfect example of facsimile behaviour. It is such a limited game in that there is no randomness; it's an extremely constrained box with limitations. It's why it was used in the '90s to show the power of supercomputers.

How? It's a technical limitation; nobody, and I mean nobody, has presented AI that is capable of genuine creative thought. AI in its current state is regurgitation of information with randomness added.
Playing devil's advocate, isn't creativity just doing something no one has done before? There is no real 'creativity', just finding a new combination that no one else has. Humans just keep trying new combinations and eventually some work. It's the whole monkeys-with-typewriters scenario. In theory, with enough time, every possible combination can occur. By sheer numbers, humans make progress now because someone somewhere will find the right combination. But for every genius moment of creation there are thousands of failures. Hasn't AI already found new combinations of antibiotics?


I'm sure I've read a couple of other examples.

Tbh it's not human creativity I think AI will struggle with, it's human stupidity, where we do something completely illogical. That would be AI's limit. An example: the deep-fried Mars bar. It's a ridiculous concept that should never have worked, yet somehow actually became a thing.
 
isn't creativity just doing something no one has done before? There is no real 'creativity', just finding a new combination that no one else has.
That's a very complicated question. It depends on what exactly we include in the notion of 'creativity'. By 'creativity' we can also mean 'a deviation from standards/models/templates', so your example of 'AI found new combinations of antibiotics' can also be an example of creativity (it didn't use already existing 'models', but created its own combinations). It's a bit like composing music, if you understand what I mean (I hope you do, because for me it's torture to explain something in English 🤦‍♀️)
But maybe I'm simplifying things and don't really understand it... I don't know.
Tbh it's not human creativity I think AI will struggle with, it's human stupidity, where we do something completely illogical
And here I agree
deep-fried Mars bar
Had to google it as I wasn't sure I understood correctly... jeez, what a perversion 🤢🤢🤢
 
That's a very complicated question. It depends on what exactly we include in the notion of 'creativity'. By 'creativity' we can also mean 'a deviation from standards/models/templates', so your example of 'AI found new combinations of antibiotics' can also be an example of creativity (it didn't use already existing 'models', but created its own combinations). It's a bit like composing music, if you understand what I mean.
I was also thinking that another area where AI would struggle is subjectivity. It could analyse all the food in the world to create a perfect meal, and humans are so subjective that they may not actually like it. Same with music: it could compose what it thinks is a perfect song, but humans could end up not liking it.
 
I was also thinking that another area where AI would struggle is subjectivity. It could analyse all the food in the world to create a perfect meal, and humans are so subjective that they may not actually like it. Same with music: it could compose what it thinks is a perfect song, but humans could end up not liking it.
This is what is coming out of the creative space at the moment. This is a great video explaining how, when you actually dig into it, it's actually failing at doing what you wanted.



This is current tech, mind, but it gives a good idea of how it's just not built to do things right.
 
It's not going to replace art. It'll assist with the tedious parts, but that's it. AI will be better at more structural things.
 
This is what is coming out of the creative space at the moment. This is a great video explaining how, when you actually dig into it, it's actually failing at doing what you wanted.



This is current tech, mind, but it gives a good idea of how it's just not built to do things right.

Yeah, I agree with all of that, but that still doesn't mean human creativity is unique. It is at the moment, but I still think that in the future AI can be creative. The big, worrying question is: will AI be able to go outside its programming as it learns? I'm sure AI will definitely be able to do unexpected things, because no matter how prescriptive the programming is, humans can't ever account for every single scenario. However, if AI did something contradictory to its programming, then that would be truly worrying.
 
Yeah, I agree with all of that, but that still doesn't mean human creativity is unique. It is at the moment, but I still think that in the future AI can be creative. The big, worrying question is: will AI be able to go outside its programming as it learns? I'm sure AI will definitely be able to do unexpected things, because no matter how prescriptive the programming is, humans can't ever account for every single scenario. However, if AI did something contradictory to its programming, then that would be truly worrying.
I don't think AI can ever go outside its programming as it were; it can merely do stuff within its programming that was not foreseen. What would be interesting is if you could ever have AI that can make AI. For example, the basic AIs I've seen work on a reward-punishment system, and certain parameters are set to try to obtain the desired effect, which can then sometimes cause an unforeseen side effect (e.g. in an AI made to play Pokemon, the AI was rewarded for the screen showing novel images, the intention being to reward exploration; what actually happened is that NPCs moving around presented enough novel images that the AI sat in one location, eventually becoming locked, as moving would not produce a novel image on the screen).
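That reward-hacking trap can be reproduced in miniature. This is a hypothetical toy, not the actual Pokemon experiment: the agent gets +1 whenever the "screen" (its position plus a wandering NPC's animation frame) is one it hasn't seen before, and because the NPC supplies near-endless novelty on its own, sitting still scores about as well as actually exploring.

```python
import random

random.seed(1)

def episode(policy, steps=500):
    """Reward +1 for every never-before-seen screen; the screen is the
    agent's position on a 10-cell strip plus a wandering NPC's frame."""
    agent, seen, reward = 0, set(), 0
    for _ in range(steps):
        agent = max(0, min(9, agent + policy()))
        npc_frame = random.randrange(100_000)   # NPC animation churn
        screen = (agent, npc_frame)
        if screen not in seen:
            seen.add(screen)
            reward += 1
    return reward

explorer = lambda: random.choice((-1, 1))   # the intended behaviour
camper = lambda: 0                          # the exploit: sit still

print(episode(explorer), episode(camper))   # rewards come out nearly equal
```

The reward function measured novelty of the screen, not exploration of the map, and the agent optimised exactly what was measured.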

What would happen if you made an AI that set the parameters of another AI with the aim of maximising its performance? Alternatively, an AI that could produce code for a new, improved AI. It's getting towards the idea of a "technological singularity", in which a machine can design its own improvements, which can then also improve themselves, and so it goes on and on.
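A minimal sketch of that first idea, with made-up names and numbers: an outer search program tunes a parameter (the learning rate) of an inner learner, judged purely on the inner learner's final loss. This is ordinary hyperparameter search rather than a singularity, but it is literally one program setting another program's parameters.

```python
import random

def inner_train(lr, steps=50):
    """The 'inner AI': gradient descent on f(w) = (w - 3)^2, a stand-in
    for training a real model. Returns its final loss."""
    w = 0.0
    for _ in range(steps):
        w -= lr * 2 * (w - 3)      # gradient of (w - 3)^2 is 2(w - 3)
    return (w - 3) ** 2

def outer_search(trials=30):
    """The 'outer AI': random search over learning rates on a log scale,
    keeping whichever setting made the inner learner perform best."""
    random.seed(42)
    best_lr, best_loss = None, float("inf")
    for _ in range(trials):
        lr = 10 ** random.uniform(-4, 0)
        loss = inner_train(lr)
        if loss < best_loss:
            best_lr, best_loss = lr, loss
    return best_lr, best_loss

lr, loss = outer_search()
print(f"chosen lr={lr:.4f}, final loss={loss:.2e}")
```

The "AI that writes an improved AI" step is the part nobody has: the outer loop here can only pick numbers inside a search space a human already defined.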
 