Why students should learn to build AI-enabled Android apps now

Mobile Development Is Not Declining — It Is Becoming the Edge of AI
There is an absolute myth floating around that mobile application development is somehow a declining technology to learn.
Nothing could be further from the truth.
With edge computing, the Internet of Things, and the rising need to make AI-powered intelligent applications available everywhere, learning to build apps with Android is one of the fastest lanes for new developers to learn the software engineering principles that put information intelligence in everyone’s pocket.
Back in the heyday of IBM in the early 1990s, everyone was talking about “ubiquitous computing.” We did not fully know what it meant then, but it had that cool technical panache. It sounded like the future was coming, even if the shape of that future was still hidden in the fog.
Now the fog has lifted.
We are all living on the edge now.
Our phones are not just communication devices. They are sensors, cameras, wallets, identity systems, learning tools, business dashboards, AI clients, and personal command centres. The mobile platform is where cloud intelligence, local data, human attention, and real-world context all meet.
That is why Android development matters.
Mobile platform computing is not yesterday’s skill. It is the next enabler.
And for students who want to become serious developers in the AI age, Android Kotlin development offers something rare: a practical, hands-on way to learn user interface design, APIs, cloud integration, databases, secure architecture, edge-aware thinking, and AI-powered business logic — all inside one platform that people actually carry with them every day.
The future of AI will not live only in research labs, enterprise dashboards, or browser windows.
It will live in apps.
It will live in pockets.
And the developers who understand how to build those apps will be the ones who help bring intelligence to the edge of everyday life.
Android development has entered a new era.
From a slide in my first-term class:
For years, students learned Android by wiring screens, handling buttons, passing data between Activities, fighting Gradle, debugging lifecycle problems, and eventually discovering that professional app development is not just “make the button work.” It is architecture. It is state. It is data. It is security. It is user experience. It is business logic.
Now Android Studio Panda 4 adds something new to that picture: AI is becoming part of the development environment itself.

Android Studio Panda 4 is now stable and includes major AI-assisted development features such as Planning Mode, Next Edit Prediction, Ask Mode, and Agent Web Search. Google describes Planning Mode as a way for the agent to create a detailed project plan before making code changes, while Next Edit Prediction is designed to suggest related edits even away from the current cursor position. (Android Developers)
That matters deeply for students.
Because the winning student of the next few years will not merely know how to “use AI.” The winning student will know how to build AI into applications.
And in Android Kotlin development, that means learning to place AI where it belongs: not as a toy chatbot pasted onto the side of the app, but as a properly designed service inside the business logic layer.
The new Android developer is an AI systems builder

Here is the shift I want my students to understand:
The app is no longer just a user interface connected to a database.
The modern app is a user interface connected to memory, reasoning, retrieval, workflow, and AI services.
That is why AI should now sit at the centre of your development efforts.
Not because AI writes all the code for you.
That is the cheap interpretation.
The serious interpretation is this:
AI changes the architecture of the application itself.
A modern Kotlin Android app may now include:
| Layer | Traditional purpose | AI-first purpose |
|---|---|---|
| Compose UI | Display screens and receive input | Let users interact with intelligent workflows |
| ViewModel | Manage state and events | Coordinate AI calls, loading states, retrieved context, and generated responses |
| Repository | Fetch and store data | Retrieve documents, notes, embeddings, and AI outputs |
| Business logic | Apply rules | Decide when to call Gemini, ChatGPT, Grok, or another model |
| Backend/Firebase | Authentication, storage, functions | Secure key management, model routing, AI service orchestration |
| MongoDB / vector store | Store documents | Support retrieval-augmented generation, or RAG |
This is where the gold is.
Students who learn this early can build apps that do more than display information. They can build apps that reason over information.
Panda 4: the IDE becomes a proactive workspace
Android Studio Panda 4 is important because it supports a more professional way of working with AI.
The three classroom-relevant features are:
1. Planning Mode
Planning Mode is the big one.
Instead of asking AI to immediately produce code, students can ask the agent to create an implementation plan first. This supports the teaching principle we have been developing for Android classes: deliberation before coding.
That lines up directly with our Planning Mode teaching module: students should read the specification, identify UI responsibilities, data responsibilities, navigation, state, and risks before touching the code. The teaching material emphasizes that Planning Mode is a “no code edits yet” phase where students produce a written implementation plan before implementation begins.
This is exactly how we stop students from treating AI as a vending machine.
The student should not say:
“Build me the app.”
The student should say:
“Here is the app specification. Create a plan showing screens, state, repositories, API calls, data models, error handling, and testing steps. Do not write code yet.”
That is the difference between AI dependency and AI-augmented engineering.
2. Next Edit Prediction
Next Edit Prediction, or NEP, is especially useful for Kotlin students because Android development often involves related changes across multiple files.
Change a data class, and you may need to update:
- a ViewModel
- a repository
- a mapper
- a Compose screen
- a test
- a Firebase DTO
- a serialization model
Google describes NEP as an evolution of code completion that anticipates edits away from the current cursor position, not just at the line where you are typing. (Android Developers)
For teaching, this is beautiful.
It helps students see that professional code is connected. A change in one file has consequences elsewhere. NEP becomes a kind of “codebase radar.”
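To make that concrete, here is a small hypothetical sketch (the names are illustrative, not part of the lab): change one field in a data class and you force matching edits in the DTO, the mapper, and any code that constructs the model. These are exactly the follow-on edits NEP is designed to surface.

```kotlin
// Hypothetical domain model: suppose we change createdAt from Int to Long.
data class CourseItem(
    val id: String,
    val title: String,
    val createdAt: Long
)

// The network DTO and its mapper must change in step with the domain model...
data class CourseItemDto(
    val id: String,
    val title: String,
    val createdAt: Long
)

fun CourseItemDto.toDomain(): CourseItem =
    CourseItem(id = id, title = title, createdAt = createdAt)

// ...and so must any test fixture or ViewModel code that builds a CourseItem.
```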
3. Agent Web Search
Agent Web Search lets the Gemini agent pull current documentation for third-party libraries directly into Android Studio. Google’s release notes describe this as expanding Gemini beyond the Android knowledge base so it can fetch current reference material from the web for external libraries such as Coil, Koin, or Moshi. (Android Developers)
This matters because students often work from outdated tutorials.
In Android, outdated tutorials are not a small problem. They are a swamp. XML-era examples, old Gradle versions, deprecated APIs, abandoned libraries, and StackOverflow answers from three Android lifetimes ago all sit there waiting to ambush beginners.
Agent Web Search helps keep the student closer to current practice.
The real win: AI inside the app, not just inside the IDE
The IDE is only half the story.
The more important teaching move is this:
Use AI to build the app, then build AI into the app.
That means teaching students to integrate model APIs into Kotlin apps through a clean architecture.
Do not hardwire “Gemini” or “ChatGPT” all over the UI.
Instead, teach a stable abstraction:
interface AIClient {
    suspend fun complete(prompt: String): String
}
Then you can have different implementations:
class GeminiClient : AIClient {
    override suspend fun complete(prompt: String): String {
        // Call Gemini through Firebase AI Logic
        return "Gemini response"
    }
}

class OpenAIClient : AIClient {
    override suspend fun complete(prompt: String): String {
        // Call OpenAI Responses API through backend or secure service
        return "OpenAI response"
    }
}

class GrokClient : AIClient {
    override suspend fun complete(prompt: String): String {
        // Call xAI Grok API through backend or secure service
        return "Grok response"
    }
}
This teaches students one of the most valuable professional patterns in AI application development:
Your app should depend on an AI capability, not on a single vendor.
Firebase AI Logic supports Gemini model access from mobile and web apps, including Kotlin and Java SDKs for Android. (Firebase) OpenAI’s platform exposes APIs for text, structured output, multimodal workflows, tools, and stateful interactions through the Responses API. (OpenAI Developers) xAI also provides API access for integrating Grok models into applications. (xAI Docs)
That means students can learn a vendor-neutral design:
Compose UI
↓
ViewModel
↓
AIUseCase / Business Logic
↓
AIClient interface
↓
Gemini / ChatGPT / Grok / other model provider
That is serious architecture.
That is employable knowledge.
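As a minimal sketch of that layering (the class and function names here are illustrative, not a fixed API), a use case in the business logic layer depends only on the AIClient interface and never names a vendor:

```kotlin
// The use case depends on the AIClient abstraction, so swapping Gemini for
// OpenAI or Grok is a wiring change, not a rewrite of the business logic.
class ExplainTopicUseCase(private val aiClient: AIClient) {

    suspend fun explain(topic: String): String {
        val prompt = "Explain the following Android topic to a student: $topic"
        return aiClient.complete(prompt)
    }
}

// Wiring, normally handled by a DI framework such as Koin or Hilt:
// val useCase = ExplainTopicUseCase(GeminiClient())
```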
Firebase as the RAD backbone for AI apps
Firebase is now one of the fastest ways to teach students how to build serious AI-enabled mobile apps without forcing them to become backend infrastructure engineers on day one.
Firebase AI Logic is designed to let developers build generative AI features into mobile and web apps using Gemini models, with Android support through Kotlin and Java SDKs. (Firebase) Firebase also provides a Gemini API template through Firebase Studio for building apps with the Gemini API pre-loaded. (Firebase)
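As one way to fill in the GeminiClient stub from earlier, here is a sketch using the Kotlin SDK Firebase originally shipped as “Vertex AI in Firebase” and now brands as Firebase AI Logic. The entry point, package names, and model name are assumptions that change between SDK versions, so check the current Firebase documentation before relying on them:

```kotlin
import com.google.firebase.Firebase
import com.google.firebase.vertexai.vertexAI

// Sketch only: requires the Firebase AI / Vertex AI in Firebase Gradle dependency;
// the model name "gemini-1.5-flash" is an assumption for illustration.
class GeminiClient : AIClient {

    private val model = Firebase.vertexAI.generativeModel("gemini-1.5-flash")

    override suspend fun complete(prompt: String): String {
        val response = model.generateContent(prompt)
        return response.text ?: ""
    }
}
```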
For students, Firebase can serve as a RAD environment: Rapid Application Development.
It gives them a practical path to:
- authenticate users
- store app data
- call cloud functions
- manage AI access more safely
- avoid embedding raw API keys directly into the Android app
- connect app logic to Gemini-powered features
This is a major professionalism point.
One of the pitfalls in AI-assisted Android development is leaking sensitive data or keys to third-party APIs, or sending user data without proper masking and consent. Our AI-assisted Android pitfall guide explicitly flags weak privacy handling, bad key practices, and poor review/testing habits as recurring problems students must learn to avoid.
So the classroom message is simple:
Do not build “AI toy apps.”
Build AI apps with architecture, privacy, testing, and secure backend thinking.
Lab: Build an AI Study Coach with Android Studio Panda 4, Kotlin, Firebase, Gemini, and MongoDB RAG
Project theme
Students will build a simple AI-powered Android app called:
StudyForge AI
The app helps a student save study notes and then ask questions about those notes.
Example:
The student saves notes like:
Kotlin coroutines let us run asynchronous work without blocking the main thread.
Then the student asks:
Why should I not make network calls on the main thread?
The app retrieves relevant notes from MongoDB, sends them as context to an AI model, and returns a study explanation.
That gives students a practical introduction to RAG: Retrieval-Augmented Generation.
MongoDB Atlas Vector Search supports semantic search by storing vector representations of data and retrieving relevant documents for generative AI applications. (MongoDB) MongoDB’s own RAG tutorials show how to create vector search indexes, store embeddings, and retrieve relevant documents for LLM-powered applications. (MongoDB)
For a student lab, I would keep MongoDB on the backend side rather than embedding database credentials directly into the Android app. The Android app should call Firebase or a small backend endpoint, and that backend should talk to MongoDB.
That keeps the app cleaner and safer.
What students will build
The app will include:
| Feature | Purpose |
|---|---|
| Add study note | User saves short study notes |
| View saved notes | Compose displays a list |
| Ask AI | User asks a question |
| Retrieve context | Backend searches MongoDB for relevant notes |
| Generate answer | Gemini, ChatGPT, Grok, or another model answers using retrieved notes |
| Display answer | Compose UI shows the AI response |
Following current development trends, we showcase the modern, Compose way of building Android apps.
Jetpack Compose puts your UI design directly into Kotlin code, replacing the older XML layout files.
The benefit? Plenty, but the main one is that UI-as-code fits naturally into Git, Docker, and the CI/CD pipelines that build your apps straight from the repository. Git handles code well; separate XML layout files for UI, not so much.

Architecture
Android Kotlin App
↓
Jetpack Compose UI
↓
StudyForgeViewModel
↓
StudyForgeRepository
↓
Firebase Callable Function or HTTPS endpoint
↓
MongoDB notes collection + vector search
↓
AI model provider: Gemini / ChatGPT / Grok
↓
Answer returned to Android app
This is the key teaching point:
The Android app is not “the whole system.”
The Android app is the mobile front end of an AI-enabled system.
That is how modern apps increasingly work.
Step 1: Create the Android Studio Panda 4 project
- Open Android Studio Panda 4.
- Create a new project.
- Choose a Kotlin + Jetpack Compose project.
- Use the Gemini API Starter template where available.
- Run the starter app on an emulator.
Now pause.
Before coding, students must use Planning Mode.
Prompt:
I am building an Android Kotlin Jetpack Compose app called StudyForge AI.
The app lets users save short study notes, view them in a list, ask a question, retrieve relevant notes from a MongoDB-backed RAG service, and send the question plus retrieved notes to an AI model.
Create an implementation plan only. Do not write code yet.
Include:
- screens
- composables
- ViewModel state
- repository methods
- backend API calls
- data models
- loading and error states
- testing steps
Students should save the plan as part of the assignment.
This matches the teaching strategy from our earlier Planning Mode module: students should submit not only working code, but also the plan, prompts, AI responses, and their own edits to the plan.
Step 2: Create the core data model
Create a Kotlin data class:
data class StudyNote(
    val id: String,
    val text: String,
    val createdAt: Long
)
Then create a second model for AI answers:
data class StudyAnswer(
    val question: String,
    val answer: String,
    val sources: List<StudyNote>
)
Teaching note:
This is a good moment to use Next Edit Prediction. After changing the data model, students should watch how Android Studio suggests related updates in ViewModels, repositories, or UI files.
Step 3: Build the Compose screen
Create a simple Compose screen:
@Composable
fun StudyForgeScreen(
    viewModel: StudyForgeViewModel = viewModel()
) {
    val notes by viewModel.notes.collectAsState()
    val newNote by viewModel.newNote.collectAsState()
    val question by viewModel.question.collectAsState()
    val answer by viewModel.answer.collectAsState()
    val isLoading by viewModel.isLoading.collectAsState()

    Column(modifier = Modifier.padding(16.dp)) {
        Text("StudyForge AI")

        OutlinedTextField(
            value = newNote,
            onValueChange = viewModel::onNewNoteChanged,
            label = { Text("Add a study note") }
        )
        Button(onClick = viewModel::saveNote) {
            Text("Save Note")
        }

        LazyColumn {
            items(notes) { note ->
                Text(note.text)
            }
        }

        OutlinedTextField(
            value = question,
            onValueChange = viewModel::onQuestionChanged,
            label = { Text("Ask a question") }
        )
        Button(onClick = viewModel::askQuestion) {
            Text("Ask AI")
        }

        if (isLoading) {
            Text("Thinking...")
        }
        if (answer.isNotBlank()) {
            Text("AI Answer")
            Text(answer)
        }
    }
}
This is not meant to be visually perfect.
It is meant to teach structure.
Students can improve the UI later.
Step 4: Create the ViewModel
class StudyForgeViewModel(
    private val repository: StudyForgeRepository = StudyForgeRepository()
) : ViewModel() {

    private val _notes = MutableStateFlow<List<StudyNote>>(emptyList())
    val notes: StateFlow<List<StudyNote>> = _notes

    private val _newNote = MutableStateFlow("")
    val newNote: StateFlow<String> = _newNote

    private val _question = MutableStateFlow("")
    val question: StateFlow<String> = _question

    private val _answer = MutableStateFlow("")
    val answer: StateFlow<String> = _answer

    private val _isLoading = MutableStateFlow(false)
    val isLoading: StateFlow<Boolean> = _isLoading

    fun onNewNoteChanged(value: String) {
        _newNote.value = value
    }

    fun onQuestionChanged(value: String) {
        _question.value = value
    }

    fun saveNote() {
        val text = _newNote.value.trim()
        if (text.isBlank()) return
        val note = StudyNote(
            id = UUID.randomUUID().toString(),
            text = text,
            createdAt = System.currentTimeMillis()
        )
        _notes.value = _notes.value + note
        _newNote.value = ""
        viewModelScope.launch {
            repository.saveNote(note)
        }
    }

    fun askQuestion() {
        val currentQuestion = _question.value.trim()
        if (currentQuestion.isBlank()) return
        viewModelScope.launch {
            _isLoading.value = true
            _answer.value = repository.askAI(currentQuestion)
            _isLoading.value = false
        }
    }
}
Teaching note:
Students must understand why the AI call runs inside viewModelScope.launch.
One of the common Android AI pitfalls is running inference or network calls on the main thread, causing freezes or ANRs. Our pitfall guide specifically recommends lifecycle-aware background work such as coroutines, WorkManager, and lifecycle-aware scopes for AI integration labs.
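As an optional refinement (not required for the minimal lab code), askQuestion can wrap the repository call in try/catch/finally so the loading flag is always reset and a failure surfaces as a readable message instead of a frozen screen:

```kotlin
fun askQuestion() {
    val currentQuestion = _question.value.trim()
    if (currentQuestion.isBlank()) return
    viewModelScope.launch {
        _isLoading.value = true
        try {
            // The repository call is a suspend function, so it never blocks the main thread.
            _answer.value = repository.askAI(currentQuestion)
        } catch (e: Exception) {
            // Surface a readable message rather than crashing or hanging the UI.
            _answer.value = "Something went wrong: ${e.message}"
        } finally {
            _isLoading.value = false
        }
    }
}
```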
Step 5: Create the repository
class StudyForgeRepository(
    private val apiClient: StudyForgeApiClient = StudyForgeApiClient()
) {

    suspend fun saveNote(note: StudyNote) {
        apiClient.saveNote(note)
    }

    suspend fun askAI(question: String): String {
        return apiClient.askQuestion(question)
    }
}
The repository keeps the ViewModel clean.
This is where students learn separation of concerns.
The UI should not know whether the answer came from Gemini, ChatGPT, Grok, or a future model that has not been invented yet.
Step 6: Connect to Firebase or backend endpoint
For teaching, keep this part simple.
The Android app calls:
POST /saveNote
POST /askQuestion
The backend handles:
- storing notes in MongoDB
- embedding the note
- retrieving relevant notes
- calling the selected AI model
- returning the answer
A simplified Android API client might look like:
class StudyForgeApiClient {

    suspend fun saveNote(note: StudyNote) {
        // Send the note to a Firebase function or backend endpoint
    }

    suspend fun askQuestion(question: String): String {
        // Send the question to a Firebase function or backend endpoint
        // Receive the AI answer as a String
        return "AI answer will appear here"
    }
}
In a production-quality version, students should use Retrofit, Ktor Client, Firebase Functions, or Firebase AI Logic depending on the teaching path.
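For example, a hedged sketch using a Firebase Callable Function might look like the following. The function names saveNote and askQuestion and the answer field in the response are assumptions for this lab rather than a fixed contract, and the snippet needs the firebase-functions and kotlinx-coroutines-play-services dependencies:

```kotlin
import com.google.firebase.functions.FirebaseFunctions
import kotlinx.coroutines.tasks.await

// Sketch only: assumes backend callable functions named "saveNote" and "askQuestion"
// that accept simple maps and return a map containing an "answer" string.
class StudyForgeApiClient(
    private val functions: FirebaseFunctions = FirebaseFunctions.getInstance()
) {

    suspend fun saveNote(note: StudyNote) {
        functions.getHttpsCallable("saveNote")
            .call(mapOf("id" to note.id, "text" to note.text, "createdAt" to note.createdAt))
            .await()
    }

    suspend fun askQuestion(question: String): String {
        val result = functions.getHttpsCallable("askQuestion")
            .call(mapOf("question" to question))
            .await()
        val data = result.data as? Map<*, *>
        return data?.get("answer") as? String ?: ""
    }
}
```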
Step 7: Backend RAG logic
The backend should perform this sequence:
Receive question
↓
Generate embedding for the question
↓
Search MongoDB for similar note embeddings
↓
Retrieve top 3–5 relevant notes
↓
Build prompt with retrieved notes
↓
Call Gemini / ChatGPT / Grok
↓
Return answer to Android app
Example prompt sent to the model:
You are a helpful study coach.
Use only the notes below as your source material.
If the answer is not present in the notes, say what is missing.
Student question:
{question}
Relevant notes:
{retrieved_notes}
Answer in clear student-friendly language.
This teaches students that RAG is not magic.
It is a workflow:
Store knowledge. Retrieve relevant knowledge. Add it to the prompt. Ask the model to answer from that context.
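To keep that workflow concrete, here is a heavily hedged backend sketch in Kotlin using the MongoDB Kotlin coroutine driver with an Atlas Vector Search aggregation stage. The database, collection, index, and field names are placeholders, the embedding call is left as a stub, and the exact pipeline options should be checked against the current MongoDB documentation:

```kotlin
import com.mongodb.kotlin.client.coroutine.MongoClient
import kotlinx.coroutines.flow.toList
import org.bson.Document

// Placeholder: a real backend would call an embedding endpoint
// (Gemini, OpenAI, or another provider) and return the vector.
suspend fun embed(text: String): List<Double> = TODO("call your embedding provider")

suspend fun retrieveRelevantNotes(question: String, mongoUri: String): List<String> {
    // In a real service the client is created once and reused, not per request.
    val client = MongoClient.create(mongoUri)
    val notes = client.getDatabase("studyforge").getCollection<Document>("notes")

    // Atlas Vector Search stage; "notes_vector_index" and "embedding" are assumed names.
    val pipeline = listOf(
        Document(
            "\$vectorSearch",
            Document()
                .append("index", "notes_vector_index")
                .append("path", "embedding")
                .append("queryVector", embed(question))
                .append("numCandidates", 100)
                .append("limit", 5)
        ),
        Document("\$project", Document("text", 1))
    )

    return notes.aggregate(pipeline).toList().mapNotNull { it.getString("text") }
}
```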
Step 8: Add a model switcher
Once the Gemini path works, students can add a provider setting:
enum class AIProvider {
    GEMINI,
    OPENAI,
    GROK
}
Then the backend can route the request:
if provider == GEMINI → call Gemini
if provider == OPENAI → call OpenAI
if provider == GROK → call xAI Grok
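In Kotlin, that routing can be a simple when expression over the enum, reusing the AIClient implementations sketched earlier:

```kotlin
// Factory-style routing: each provider maps to an AIClient implementation.
fun clientFor(provider: AIProvider): AIClient = when (provider) {
    AIProvider.GEMINI -> GeminiClient()
    AIProvider.OPENAI -> OpenAIClient()
    AIProvider.GROK -> GrokClient()
}

suspend fun answerWith(provider: AIProvider, prompt: String): String =
    clientFor(provider).complete(prompt)
```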
This reinforces vendor-neutral architecture.
The lesson is not “learn one AI API.”
The lesson is:
Learn how AI APIs fit into application architecture.
That is a much more durable skill.
Student deliverables
Students submit:
- Screenshot of the running app
- Kotlin data models
- Compose screen
- ViewModel
- Repository/API client
- Planning Mode document
- AI prompts used
- Short reflection: “What did AI help with, and what did I have to verify?”
Assessment should not reward blind copying. Our prior Android teaching outline stresses that students should be graded on planning, AI prompt quality, edits, final code clarity, and their ability to critique AI output.
Common warnings for students
Do not put raw API keys in your Android app
Mobile apps can be inspected. Secrets embedded in APKs are not truly secret.
Use Firebase, backend functions, or secure server-side routing.
Do not paste private user data into prompts without thinking
AI apps must be designed with privacy awareness.
Do not accept generated code blindly
AI can create code that looks professional but contains lifecycle mistakes, outdated APIs, bad threading, or weak error handling.
Do not start with multi-agent complexity
For student projects, begin with one clean API call.
Then add retrieval.
Then add model switching.
Then add advanced orchestration.
In that order.
Conclusion: this is the moment for AI-enabled Android students
Android Studio Panda 4 is not just another IDE update.
It is a signal.
The development environment is becoming AI-assisted. The applications are becoming AI-enabled. The student who understands both sides of that equation has a real advantage.
This is why I am bringing this into my teaching practice.
Students should not graduate knowing only how to build static screens and simple CRUD apps. They should graduate understanding how to build apps where AI is part of the reasoning layer, the business logic layer, and the user value proposition.
The next wave of Android apps will not merely ask:
“What button did the user press?”
They will ask:
“What does the user need to understand, decide, retrieve, summarize, automate, or create?”
That is the opportunity.
Android Studio Panda 4 gives us the development environment.
Kotlin gives us the app architecture.
Firebase gives us the rapid backend.
MongoDB gives us the memory and retrieval layer.
Gemini, ChatGPT, Grok, and other models give us the reasoning engines.
Now the job of the student is to learn how to connect them intelligently.
That is where the next generation of AI-enabled Android developers will win.
Show Me The Money: The Android Job Scene in Toronto
Let’s be blunt: you are not studying late at night and grinding through labs for a gold star sticker. You want a career, rent money, travel money, and—yes—some room for fun. Android development in Toronto/GTA can absolutely give you that.
Right now there are around a hundred Android‑focused roles and many more “mobile developer (iOS/Android)” postings in the Toronto area, across banks, consultancies, and product companies. That means real demand, not just hype. Companies like General Motors, TD, Tangerine, and dozens of startups and fintechs list Android and Kotlin as core skills for their mobile teams. (Glassdoor)
The pay is serious even at the junior level. Glassdoor data for Toronto shows Android developers earning a typical base range of about 66,000–101,000 CAD, with an average around 88,000 CAD once you have some experience under your belt. PayScale puts an entry‑level Android developer (less than one year) around 51,000 CAD and early‑career (1–4 years) around 73,000 CAD in Toronto. In other words, if you put a couple of focused years into building skills and a portfolio, seeing numbers in the 70k–90k range is realistic, not a fantasy. (Glassdoor; PayScale)
As you level up, the ceiling gets much higher. Senior and staff Android roles in Toronto regularly advertise six‑figure salaries, with some postings showing 140,000–160,000 CAD or more for specialized Android work. Crypto, fintech, and big‑tech‑adjacent companies sometimes push even higher, with some data sources reporting averages above 120,000 CAD for experienced Android developers in the city. (Glassdoor)
Why This Matters For Your Life (Not Just Your Resume)
Money isn’t everything, but it changes your options. A solid Android or mobile developer salary in Toronto can mean:
- Moving out sooner and choosing where you want to live, instead of taking whatever is cheapest.
- Paying off OSAP or other loans on your terms.
- Having the budget for travel, festivals, hobbies, and the kind of social life that makes your twenties and thirties memorable.
- The confidence that comes from being in demand—recruiters reach out to you, not the other way around.
Whether you’re a pragmatic young woman who wants independence and career security, or a young guy who wants enough income to impress himself and everyone around him, the equation is the same: tech skills that employers actually pay for. Android is one of those skills.
How George Brown Full Stack Leads Into These Jobs
Here’s the good news: the George Brown Full Stack program already teaches most of the building blocks Toronto employers are paying for.
Job ads for Android and junior mobile developers in the GTA consistently mention:
- Kotlin or Java, plus Android Studio, as the main programming environment. (Glassdoor)
- REST APIs, JSON, and cloud platforms like Firebase or AWS. (LinkedIn)
- Databases and data modeling: skills you practice in your back‑end and SQL courses. (PayScale)
- Version control with Git and working in agile teams. (Indeed)
When you add a couple of focused Android projects on top of your Full Stack coursework—especially AI‑powered apps built in Android Studio Panda 4—you suddenly match the wish list in real Toronto job postings. The difference between “I took some courses” and “I can show you a working Android app that talks to a cloud backend and uses AI” is the difference between hoping for a job and walking into interviews with leverage.
Android + AI: An Edge In a Crowded Market
Toronto is competitive, which means you want something that makes your resume jump out of the pile. Right now, that “something” is clearly AI.
Employers are already asking mobile teams to integrate chatbots, recommendation systems, and smart in‑app assistants. When you can say, “I’ve built an AI‑powered Android app in Kotlin using Jetpack Compose, Firebase, and an external model like Gemini or ChatGPT,” you are no longer just another junior dev—you are the person who can help them ship the next generation of their product.
That’s exactly what we practice in my labs: Android Studio Panda 4, AI agents in the IDE, Firebase for secure backends, and MongoDB/RAG for intelligent data retrieval. It’s not just a cool classroom exercise; it’s training for the job descriptions that are live in Toronto right now. (Glassdoor)
Bottom Line
If your goal is financial independence, career flexibility, and the ability to build things people actually use every day, Android development is a very pragmatic path—especially when combined with the George Brown Full Stack program. The market is there, the salary bands are real, and the skills you learn in class map directly to what Toronto employers are hiring for.
Toronto Android developer roles expect a mix of solid Kotlin/Android fundamentals, modern architecture, cloud/API skills, and collaboration practices. (Indeed)
Core Android & Kotlin skills
- Strong Kotlin (and often some Java) with Android Studio and the Android SDK. (Indeed)
- Experience building screens with Jetpack Compose and modern UI toolkits. (Indeed)
- Understanding of Android components (activities, fragments, services), app lifecycle, and manifest configuration. (Indeed)
- Familiarity with design patterns like MVVM, MVP, or Clean Architecture. (Indeed)
Architecture, data, and networking
- Comfortable using coroutines and Flow or other reactive patterns for async work. (Indeed)
- Consuming RESTful APIs and JSON, including authentication and error handling. (Indeed)
- Local data storage with Room/SQLite or similar, and awareness of caching strategies. (Indeed)
- Basic understanding of app performance, memory, and responsiveness on mobile devices. (Indeed)
Testing, tooling, and DevOps habits
- Unit and UI testing using tools like JUnit and Espresso; some roles mention test automation and TDD. (Indeed)
- Git proficiency (branches, pull requests, code review) and experience with CI/CD is commonly requested. (Indeed)
- Ability to debug, troubleshoot crashes, and stay on top of security updates and vulnerabilities. (Indeed)
Cloud, cross‑platform, and AI‑adjacent expectations
- Experience with cloud services such as Firebase or AWS (auth, analytics, serverless functions, etc.). (Indeed)
- Many “mobile developer” postings want Android plus iOS or React Native, so awareness of Swift/Objective‑C or cross‑platform frameworks is a plus. (LinkedIn)
- Increasingly, job ads mention AI‑enhanced workflows or modern tooling, and some junior roles (e.g., at Intuit) explicitly reference AI‑assisted coding and UX‑focused Android development. (Talent.com)
Professional and soft skills
- Ability to understand a mobile app end‑to‑end: from UI, through business logic, to backend integration. (Indeed)
- Collaboration with designers, product owners, and other developers in agile teams, often using Jira/Confluence. (Indeed)
- Clear written and verbal communication, plus a portfolio of apps or Play Store contributions, which is frequently listed as “strongly preferred.” (Indeed)
If you can:
- build a Kotlin/Compose app,
- talk to a cloud backend (e.g., Firebase),
- integrate REST APIs,
- write basic tests, and
- work in Git with a team,
you are aligned with what Toronto Android postings ask for today.
https://ca.indeed.com/q-android-kotlin-l-toronto,-on-jobs.html?vjk=ef5b5150148db027












