Google’s AutoML creates NASNet, an AI-designed vision model that surpasses human-crafted architectures
In a landmark advancement in artificial intelligence, researchers at Google Brain have developed AutoML, an AI system capable of designing other AI models. Using reinforcement learning, AutoML employs a controller neural network that proposes candidate "child" network architectures, trains and evaluates each one, and uses the resulting accuracy as a reward signal to steer its subsequent proposals toward better designs.
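The controller-and-reward loop described above can be illustrated with a deliberately tiny sketch. Everything here is an assumption for illustration: the two-choice search space (kernel size and width), the `evaluate_child` reward function (a stand-in for actually training a child network), and the hyperparameters bear no relation to Google's real system, which searches a far richer space of convolutional cells.

```python
import math
import random

# Toy search space (assumed for illustration; not NASNet's real space).
KERNELS = [3, 5, 7]
WIDTHS = [16, 32, 64]

def evaluate_child(kernel, width):
    """Stand-in for training a child network and measuring validation
    accuracy. This toy reward peaks at kernel=5, width=32 (a
    hypothetical optimum chosen for the example)."""
    return 1.0 - 0.1 * abs(kernel - 5) - 0.005 * abs(width - 32)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

def search(steps=2000, lr=0.1, seed=0):
    """REINFORCE-style search: sample an architecture, score it, and
    nudge the controller's logits toward above-baseline choices."""
    random.seed(seed)
    k_logits = [0.0] * len(KERNELS)
    w_logits = [0.0] * len(WIDTHS)
    baseline = 0.0  # moving-average reward baseline to reduce variance
    for _ in range(steps):
        kp, wp = softmax(k_logits), softmax(w_logits)
        ki, wi = sample(kp), sample(wp)
        reward = evaluate_child(KERNELS[ki], WIDTHS[wi])
        baseline = 0.9 * baseline + 0.1 * reward
        adv = reward - baseline
        # Policy-gradient update: grad of log-prob is (1[chosen] - p).
        for i in range(len(k_logits)):
            k_logits[i] += lr * adv * ((1.0 if i == ki else 0.0) - kp[i])
        for i in range(len(w_logits)):
            w_logits[i] += lr * adv * ((1.0 if i == wi else 0.0) - wp[i])
    best_k = KERNELS[max(range(len(k_logits)), key=k_logits.__getitem__)]
    best_w = WIDTHS[max(range(len(w_logits)), key=w_logits.__getitem__)]
    return best_k, best_w
```

Run over many iterations, the controller's sampling distribution concentrates on architectures that earn higher reward, which is the essence of the approach; the real system simply replaces the toy reward with the expensive step of training each child network to convergence.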
One of its most remarkable achievements, announced in 2017, is NASNet, a neural architecture specialized in image and video recognition. Its architecture was discovered entirely by AutoML's search process, and it surpassed all human-designed systems on standard object recognition benchmarks: it achieved an impressive 82.7% top‑1 accuracy on the ImageNet dataset, exceeding previous records by 1.2%, and scored 43.1% mean Average Precision (mAP) on the COCO dataset, 4% higher than its closest competitor.
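For readers unfamiliar with the headline metric: top-1 accuracy is simply the fraction of images for which the model's single highest-scoring class matches the ground-truth label. A minimal illustration with toy scores (not NASNet outputs):

```python
def top1_accuracy(scores, labels):
    """Fraction of examples where the argmax class equals the label."""
    correct = sum(
        1 for s, y in zip(scores, labels)
        if max(range(len(s)), key=s.__getitem__) == y
    )
    return correct / len(labels)

# Toy example: 4 images, 3 classes; the last prediction is wrong.
scores = [[0.1, 0.7, 0.2], [0.6, 0.3, 0.1],
          [0.2, 0.2, 0.6], [0.5, 0.4, 0.1]]
labels = [1, 0, 2, 1]
print(top1_accuracy(scores, labels))  # → 0.75
```

COCO's mAP is a stricter detection metric, averaging precision over classes and bounding-box overlap thresholds rather than checking a single label.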
Even a lightweight NASNet variant designed for mobile platforms outperformed comparably sized human-designed models by 3.1%, demonstrating that AutoML is effective not only for large-scale AI design but also in resource-constrained environments.
In a move to foster open research, Google has released NASNet as open-source for image classification and object detection. This opens the door to widespread innovation across industries relying on computer vision, from autonomous vehicles to robotics and medical imaging.
However, the success of AutoML also raises critical questions about the future of AI development. As AI begins to design systems beyond human capabilities, concerns emerge regarding control, transparency, and ethical oversight. The potential for misuse in areas such as automated surveillance highlights the urgent need for robust regulation and global collaboration.
AutoML and NASNet mark a new era where machines may soon become the architects of increasingly powerful technologies. Ensuring these developments remain beneficial to humanity is now a central challenge for researchers, governments, and industry leaders alike.
Study: "Learning Transferable Architectures for Scalable Image Recognition", https://arxiv.org/abs/1707.07012