Google releases Gemini Advanced as replacement for Bard

Gemini Advanced takes over from Google Bard, bringing both a rebrand and an AI model upgrade.

Google issued a press release today announcing a major AI update: Google Bard is no more. It has been replaced and rebranded as Gemini. As part of this change, Google has also released its most capable model yet, Gemini Advanced, a paid tier that costs $20 per month.

💡
With our Ultra 1.0 model, Gemini Advanced is far more capable at highly complex tasks like coding, logical reasoning, following nuanced instructions and collaborating on creative projects. Gemini Advanced not only allows you to have longer, more detailed conversations; it also better understands the context from your previous prompts.

Is Gemini Advanced good?

I tried the service, which currently offers a two-month trial period. From what I can see, it is indeed more capable than the previous model and easily rivals the latest ChatGPT models, though a proper comparison would require testing both on specific benchmarks.

For example, I chatted with Gemini Advanced and asked it to write an Android app for me. I started with simple prompts and then progressively asked it to add more complexity. The result was indeed very impressive:

import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.compose.foundation.Image
import androidx.compose.foundation.layout.*
import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.foundation.lazy.items
import androidx.compose.material.*
import androidx.compose.runtime.*
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.compose.ui.platform.LocalContext
import androidx.compose.ui.res.painterResource
import androidx.compose.ui.res.stringResource
import androidx.compose.ui.text.style.TextAlign
import androidx.compose.ui.tooling.preview.Preview
import androidx.compose.ui.unit.dp
import androidx.navigation.NavController
import androidx.navigation.compose.NavHost
import androidx.navigation.compose.composable
import androidx.navigation.compose.rememberNavController
import androidx.room.*
import com.google.gson.Gson
import okhttp3.Cache
import okhttp3.OkHttpClient
import okhttp3.logging.HttpLoggingInterceptor
import kotlinx.coroutines.launch
import retrofit2.Retrofit
import retrofit2.converter.gson.GsonConverterFactory
import retrofit2.http.GET
import java.io.File

// Data Structures
@Entity // required for Room to treat this class as a table
data class Book(
    @PrimaryKey val title: String,
    val coverImageUrl: String,
    val chapters: List<String>, // Room needs a TypeConverter to persist this
    val fetchTimestamp: Long = System.currentTimeMillis()
)

// Retrofit interface
interface BookApi {
    @GET("your_api_endpoint") 
    suspend fun getBooks(): List<Book>  
}

// Room Dao
@Dao
interface BookDao {
    @Query("SELECT * FROM Book")
    fun getAllBooks(): List<Book>

    @Insert(onConflict = OnConflictStrategy.REPLACE)
    fun insertAll(books: List<Book>)

    @Query("DELETE FROM Book WHERE fetchTimestamp < :expirationTime")
    fun deleteExpiredEntries(expirationTime: Long)
}

// Room Database
@Database(entities = [Book::class], version = 1)
abstract class BookDatabase : RoomDatabase() {
    abstract fun bookDao(): BookDao
}

@Composable
fun BookListApp() {
    val navController = rememberNavController()
    var isLoading by remember { mutableStateOf(false) }

    // Cache Setup
    val context = LocalContext.current.applicationContext
    val isConnected = true // placeholder; replace with a real connectivity check
    val cacheSize = 10L * 1024 * 1024 // 10 MB (OkHttp's Cache expects a Long)
    val cacheDir = File(context.cacheDir, "http-cache")
    val cache = Cache(cacheDir, cacheSize)
    val loggingInterceptor = HttpLoggingInterceptor().setLevel(HttpLoggingInterceptor.Level.BASIC)

    // OkHttpClient setup
    val client = OkHttpClient.Builder()
        .cache(cache)
        .addInterceptor(loggingInterceptor)
        .addInterceptor { chain ->
            val original = chain.request()
            // Set cache headers... (adjust if needed)
            val cacheHeaderValue = if (isConnected) "public, max-age=2628000" else "public, only-if-cached, max-stale=2628000"
            val request = original.newBuilder().header("Cache-Control", cacheHeaderValue).build()
            chain.proceed(request)
        }
        .build()

    // Retrofit setup
    val gson = Gson()
    val retrofit = Retrofit.Builder()
        .baseUrl("https://your_base_api_url/") 
        .client(client)
        .addConverterFactory(GsonConverterFactory.create(gson)) 
        .build()
    val bookApi = retrofit.create(BookApi::class.java)

    // Room setup
    val database = Room.databaseBuilder(LocalContext.current, BookDatabase::class.java, "book-db").build()
    val cachedDuration = 2628000000L  // Approx 1 month in milliseconds

    // ... (rest of composables go here) ...
}

// ... (Your composables - BookListScreen, BookDetailsScreen, BookList, BookItem, BookChapterList) ...

@Preview
@Composable
fun DefaultPreview() {
    BookListApp()
}

It was not perfect: despite being told to merge all the code into a single file, it did not do so, though it generated correct code for each individual piece of functionality.
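One concrete gap worth noting in the generated code: Room cannot persist a `List<String>` column like `chapters` directly, and the code as shown omits the TypeConverter this requires. Below is a minimal sketch of such a converter, written as plain Kotlin so it stands alone; in a real project the two functions would carry `@TypeConverter` annotations, and the separator choice is my own assumption.

```kotlin
// Converts a chapter list to and from a single String column, since
// Room can only persist primitive-like column types directly.
object ChapterConverters {
    // Delimiter chosen as a control character unlikely to appear in chapter titles.
    private const val SEP = "\u0001"

    fun fromList(chapters: List<String>): String = chapters.joinToString(SEP)

    fun toList(data: String): List<String> =
        if (data.isEmpty()) emptyList() else data.split(SEP)
}
```

With this in place, registering the converter via `@TypeConverters(ChapterConverters::class)` on `BookDatabase` would let the `chapters` field round-trip through the database.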

Image generation capability

I was underwhelmed by its image generation ability. While it generated good images, it failed to generate them in a different aspect ratio.

As usual you see the fingers aren't rendered well. But the image is pretty decent.

Why do AI models suck at generating fingers in images?

AI models struggle with generating realistic fingers in images for several reasons:

  • Complexity of Hand Anatomy: Human hands are incredibly complex. We have 27 bones, numerous joints, and subtle musculature that allows for an extraordinary range of intricate movements and poses. It's challenging for AI to fully capture this complexity and translate it into a convincing image.

  • Limited Training Data: The quality of an AI model depends heavily on the data it's trained with. While vast image datasets exist, hands may not always be prominently featured or may exist in less usual perspectives. The model may lack sufficient references for how fingers should look in various positions, especially more complex ones.

  • 3D Understanding: Humans understand the three-dimensional structure of a hand intrinsically. Fingers wrap around objects, overlap, and change proportions when seen from different angles. Since AI models are often trained on 2D images, understanding and conveying this 3D structure realistically can be a major challenge.

  • Sensitivity to Errors: Humans are extremely perceptive about hands due to our own experience with them. Even small deviations from what looks 'right' about fingers can be immediately noticeable and jarring, breaking the illusion of realism.

How AI is Improving:

Despite these challenges, AI researchers are actively working on solutions to improve hand generation:

  • More Focused Datasets: Creating new image datasets specifically focused on hands in various poses and scenarios can lead to advancements.

  • 3D Modeling: Incorporating 3D modeling techniques for training helps the AI improve its understanding of hand structure and positions.

  • Better Understanding of Context: AI models are getting better at understanding the overall context of images. This includes comprehending how hands typically interact with other objects and environments, resulting in more realistic interactions in generated images.

While generating perfect hands still needs work, improvements are continually happening! If you're interested in delving deeper into this topic, feel free to ask.

Conclusion

With Gemini Advanced, Google has demonstrated that the company is capable of delivering cutting-edge AI technology.
