Is A.I. consciousness useful?

One of the scenarios for an AI achieving the illusion of consciousness is a high-speed multilayered neural network. These are often laid out as layer after layer of neurons (really only four, five, or six layers, since the computational complexity quickly becomes daunting) and “trained” on some input. For instance, linking each dendrite to a pixel on the screen and rewarding or punishing recognition of five-pointed stars. With increasing complexity, it is possible to build an entire artificial nervous system this way.
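
To make the setup concrete, here is a minimal sketch, in Python with NumPy, of the kind of network described above: a tiny multilayer perceptron whose 64 inputs are each linked to a pixel, “rewarded or punished” by gradient descent until it flags a crude five-pointed-star pattern. The star layout, the layer sizes, and the training details are illustrative assumptions, not anything prescribed here.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical toy data: 8x8 pixel images, labelled 1.0 when they
    # contain a crude "five-pointed star" (five lit pixels), else 0.0.
    STAR = np.zeros((8, 8))
    STAR[[1, 3, 3, 6, 6], [4, 1, 7, 2, 6]] = 1.0  # the five points

    def make_example():
        noise = rng.random((8, 8)) * 0.2
        if rng.random() < 0.5:
            return (STAR + noise).ravel(), 1.0  # star present
        return noise.ravel(), 0.0               # star absent

    # A small multilayered network: 64 inputs (one per pixel, the
    # "dendrites") -> 16 hidden neurons -> 1 output.
    W1 = rng.normal(0.0, 0.1, (64, 16))
    W2 = rng.normal(0.0, 0.1, (16, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # "Reward or punish": nudge the weights against the error, i.e.
    # plain stochastic gradient descent on a squared-error loss.
    lr = 0.5
    for step in range(5000):
        x, y = make_example()
        h = sigmoid(x @ W1)           # hidden-layer activations
        p = sigmoid(h @ W2).item()    # predicted probability of "star"
        grad_out = (p - y) * p * (1.0 - p)
        grad_h = grad_out * W2.ravel() * h * (1.0 - h)
        W2 -= lr * np.outer(h, grad_out)  # backpropagate: output layer
        W1 -= lr * np.outer(x, grad_h)    # backpropagate: hidden layer

    x, y = make_example()
    print(f"label={y}, prediction={sigmoid(sigmoid(x @ W1) @ W2).item():.2f}")

Even at this toy scale, the result of training is just two matrices of numbers; nothing in them is labelled “star”.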

Now, consciousness isn’t something we can readily train for. How do you train for something when we have but the faintest glimmering of what it is? Does an AI even require consciousness to have intelligence?

Say we train the simple network to recognize the five-pointed star. If we dissect the network, maybe we can decipher it a bit and produce a visualization of what it is able to recognize. But do we have an understanding of what it means for the network to “visualize” it? Is that even what it is doing? What is the program? What is the algorithm for that? How can we understand the semantics of the trained dendrites? All we’ve done is encode the data so that we “see” what the network has calculated as a recognized pattern.
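
One crude way to attempt that dissection, continuing the hypothetical network sketched above, is to reshape a hidden unit’s incoming weights back into the 8x8 pixel grid and render their signs and magnitudes. This is a common weight-visualization trick; the rendering is an encoding we choose for our own eyes, not a picture the network holds.

    import numpy as np

    # Stand-in weights so the snippet runs on its own; in practice,
    # pass the trained W1 from the sketch above.
    W1 = np.random.default_rng(0).normal(0.0, 0.1, (64, 16))

    def show_unit(W1, unit):
        # Reshape the unit's 64 incoming weights back into the grid.
        weights = W1[:, unit].reshape(8, 8)
        scale = np.abs(weights).max() or 1.0
        for row in weights:
            # '#' where the unit favours bright pixels, '-' where it
            # penalizes them, '.' where it is largely indifferent.
            print("".join("#" if w > 0.5 * scale else
                          "-" if w < -0.5 * scale else "."
                          for w in row))

    show_unit(W1, 0)

A unit that has latched onto the star should show ‘#’ marks near the five points, but those characters are our semantics laid over its weights, not the network’s.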

Now what about something as complex as consciousness? As syntax and semantics? As fear or love? If an AI had all of those, could we decode them? Could we understand them?

What is the point of building a conscious being if we still do not understand what it means to be conscious? What could we learn? Could we build one smart enough to tell us what consciousness means and to answer the fundamental questions that drive humanity to discover and search?

I think we will in fact learn quite a bit, though sceptics will continue to scoff at what is, to them, a mere machine. Finding out how someone else thinks is delightful and educational. A completely alien mind would offer new avenues for inquiry. But it would, like everything else we discover, only lead to more questions.