
Codex CLI For Data Workflow Automation: A Complete Guide

Master OpenAI's Codex CLI to automate data workflows. Learn to conduct EDA, build Python ETL pipelines, and generate tests directly from your local terminal.
Apr 14, 2026  · 15 min read

If you are a data professional, you probably handle many repetitive coding tasks every day: profiling brand-new datasets, constructing data pipelines from scratch, or manually writing tests for your data transformations. This work is necessary, but it eats up a lot of time.

But what if your terminal could actually manage and write all of that repetitive boilerplate code for you, while you focus your energy on brainstorming and decision-making? That is where OpenAI's Codex CLI comes in. It is a highly capable AI coding agent built directly into your command line, and as we will see, it is great for streamlining data workflows.

In this tutorial, we will go over how data analysts and scientists can use Codex CLI to speed up their most common day-to-day data tasks. We are going to cover everything from conducting initial exploratory data analysis to building full-fledged data pipelines, and even making automated tests for your transformations, all through the terminal itself.

If you want to know more about building agentic AI systems, I highly recommend enrolling in our AI Agent Fundamentals skill track, which covers everything you need to know.

What Is Codex CLI?

So, let’s first understand what Codex CLI is. At its core, Codex CLI is an open-source, terminal-based coding agent developed by OpenAI.

It is built using the Rust programming language, which makes it quite fast and efficient. But the most important thing to understand is that it operates from right in your command line, which grants it the ability to read your files, edit your code, and even execute commands locally on the machine.

[Image: Codex CLI architecture]

How Codex CLI differs from ChatGPT for data tasks

While you might already be used to using the standard ChatGPT web interface for your work, Codex CLI is very different. When you use the web interface, the AI model is completely isolated from the environment where you work. 

With Codex CLI, the agent actually has direct access to your local file system. It can run Python scripts, look at the output or any errors that come back, and it maintains a full awareness of your entire project structure without you having to first explain everything to it.

| Workflow step / feature | ChatGPT (web browser) | Codex CLI (terminal) |
| --- | --- | --- |
| Accessing data | You have to manually open your CSV file and copy-paste a few rows of raw data into the chat to help it understand. | It can independently open and read your CSV file directly from your local file system. |
| Executing code | You must manually copy the generated script, paste it into your local code editor, and then run it yourself. | It automatically writes the necessary Python script, runs it, and shows you the final output in the terminal itself. |
| Overall experience | Involves a lot of tedious back-and-forth copying and pasting between windows. | Everything happens in one continuous, seamless flow inside the terminal. |

Now, of course, because the Codex agent has the power to directly edit your files and run commands on your machine, it comes with different approval modes to ensure you always stay in control. The three approval modes are:

  • Auto (default): Codex can work end-to-end inside your current project folder. If it needs to step outside that boundary or do something that involves the network, it will stop and ask first.
  • Read-only: Codex can inspect your project and suggest what to do, but it won’t touch files or execute anything until you approve the plan.
  • Full Access: Codex is no longer confined to the project directory and can operate across your machine, including network access, without pausing for confirmation. Only use this mode when you fully trust the repo and you know the task is safe.

When you are first starting out, I recommend beginning in Read-only, then moving up once you trust the workflow.

You can switch approval modes inside a running Codex session using /permissions. That’s the easiest way to move from Read-only to Auto once you’re comfortable.

/permissions

If you want to start Codex in a stricter mode from the beginning, you can set the sandbox and approval policy flags when launching it. For example, this starts in a conservative read-only setup that still prompts when needed.

codex --sandbox read-only --ask-for-approval on-request

Setting Up Codex CLI for Data Projects

There are a few prerequisites that you need to have so that you can properly follow along with the tutorial. 

  • Python 3.10 or higher installed on your machine
  • Basic familiarity with the terminal or command line
  • pip (Python’s package manager) installed to manage packages
  • Node.js with npm installed, since the Codex CLI is distributed as an npm package
  • Either a paid-tier (Plus, Pro, Team, Business, Enterprise) ChatGPT account or an OpenAI API key to access OpenAI’s models, on top of which Codex is built

First things first, you need to install the Codex CLI on your machine. To do this, you can just open up your terminal and install the CLI globally on your system using this command:

npm install -g @openai/codex

The next step will be to authenticate your account so the tool knows who you are. You can verify that everything is installed correctly and launch the agent for the very first time just by typing codex into your terminal.

Once you press enter, it will open a browser window where you will be prompted to log in via your ChatGPT account. Once logged in, you’ll be ready to use the tool.

If you don’t have a paid ChatGPT subscription and want to use an API key instead, there’s also the pay-per-use option. You can acquire a key from the OpenAI console.

Configuring your Python data environment

Before we start asking the AI to do data analysis, it is important that we configure our Python data environment properly. This is essential to do as Codex CLI operates inside whatever environment it is currently running in. So, if the agent needs to write a script that uses data science libraries like pandas, scikit-learn, or matplotlib, you need to make sure those libraries are installed and available for it to use.

We can do this by activating a Python virtual environment before we launch Codex. Here is a sample setup script with the exact commands you can run in your terminal to create a virtual environment, activate it, install the necessary data packages, and then launch the agent:

python3 -m venv data_env
source data_env/bin/activate
pip install pandas scikit-learn matplotlib
codex

Creating an AGENTS.md file for data projects

One more important step in setting up your project is creating a file named AGENTS.md inside your main project folder. You can look at this file as a set of persistent instructions that the Codex agent reads automatically every single time it opens up your project. It tells the AI about how you want it to behave and how you want it to write code for this specific workspace.

For data work, we want to ensure the code the AI generates is clean, readable, and professional. So, here is a sample AGENTS.md file that is specifically tailored for a data project. You can simply create this file and paste this text inside:

# Data Project Guidelines


When writing Python code for this project, please strictly follow these rules:
- Enforce PEP 8 formatting standards for all Python code.
- Always use highly descriptive variable names. Do not use generic, lazy names like df, data, x, or y. Instead, use specific names like transaction_data or revenue_series.
- Prefer pandas best practices, such as using vectorized operations instead of iterating through rows.
- Generate clear, descriptive docstrings for every single function.
- Always include Python type hints for function arguments and return values.

Since this file will be read every time, independent of the task at hand, it is best practice to keep it concise and only focus on instructions that apply to every prompt. For more specific instructions, you should use skills instead. 

Using Codex CLI For Exploratory Data Analysis

Now, moving on to the actual data work, we are going to start with Exploratory Data Analysis, or EDA. As you probably already know, this is the most common starting point for any new data project you take on. Before you can build models or pipelines, you have to know what your data looks like.

The beautiful thing here is that with the Codex CLI, a single, simple natural-language prompt can produce a full, working EDA script for you.

The scenario: For our examples today, let's imagine we are working with a realistic, synthetic dataset. Let's say we have an e-commerce dataset named transactions.csv sitting right there in our project folder. It is filled with realistic business data like order IDs, user IDs, purchase timestamps, and transaction amounts.

Profiling a dataset

When you get a new file like this, the first thing you want to do is profile it to understand its basic structure. Instead of writing the boilerplate pandas code yourself, you can literally just open up your terminal, where your Codex session is running, and type in a prompt exactly like this:

Profile the transactions.csv file. Show shape, dtypes, missing values, and summary statistics.

When you hit enter, Codex reads the first few lines of your transactions.csv file directly from your local file system. It will then generate a complete Python script to perform the profiling and, unless your approval mode auto-approves it, ask whether you want to run it.

You will immediately see the exact shape of the data, the data types of your e-commerce columns, and exactly how many missing values you have to deal with, all without writing a single line of code yourself.
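To make this concrete, here is a minimal sketch of the kind of profiling script Codex typically produces for a prompt like the one above. The file name transactions.csv comes from our running example; the function name is just illustrative.

```python
from pathlib import Path

import pandas as pd


def profile_dataset(csv_path: str) -> None:
    """Print shape, dtypes, missing-value counts, and summary statistics."""
    transaction_data = pd.read_csv(csv_path)
    print("Shape:", transaction_data.shape)
    print("\nDtypes:")
    print(transaction_data.dtypes)
    print("\nMissing values per column:")
    print(transaction_data.isna().sum())
    print("\nSummary statistics:")
    print(transaction_data.describe(include="all"))


# Guarded so the sketch is safe to run even when the sample file is absent.
if Path("transactions.csv").exists():
    profile_dataset("transactions.csv")
```

The value of the agent is not that this code is hard to write, but that it is written, executed, and summarized for you in one step.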

Creating visualizations from natural language

Numbers in a terminal are great, but eventually, you need to actually see the data visually. You can generate surprisingly complex visualizations just by describing what you want in plain English.

For instance, if you want to get a birds-eye view of your e-commerce business, you could give Codex a prompt that looks like this:

Create a matplotlib dashboard with 3 subplots showing revenue by month, product categories ranked by sales, and order distribution by day of week.

That is a pretty complex request. But Codex will analyze both the prompt and your data file again, figure out how to group the dates and sum the revenue, create a step-by-step plan, and turn it into a robust matplotlib script to generate those exact subplots.


Now, here is a really crucial point about working with AI agents like this: it is inherently an iterative process. When Codex suggests the first version of the visualization code, you should just go ahead and approve it to see what it looks like. 

Maybe the first version generates the chart, but you notice that the x-axis labels are overlapping and hard to read, or maybe the colors are a bit too bright. You don't have to open the script and fix the matplotlib parameters manually. 

You simply reply with a follow-up prompt, saying something like, "The labels on the bottom are overlapping, please rotate them by 45 degrees and make the legend colors softer." Codex will then refine the script, run it again, and give you the updated, polished dashboard.
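For reference, a dashboard like the one requested above boils down to roughly this structure. This sketch uses synthetic stand-in data (the column names purchase_date, category, and revenue are assumptions from our example) and saves to a file so it also runs headlessly:

```python
import matplotlib

matplotlib.use("Agg")  # headless backend, so the script runs without a display
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Synthetic stand-in for transactions.csv; column names are assumptions.
rng = np.random.default_rng(42)
transaction_data = pd.DataFrame({
    "purchase_date": pd.date_range("2025-01-01", periods=300, freq="D"),
    "category": rng.choice(["books", "toys", "electronics"], size=300),
    "revenue": rng.uniform(5, 200, size=300),
})

fig, (ax_month, ax_category, ax_weekday) = plt.subplots(1, 3, figsize=(15, 4))

# Subplot 1: revenue summed by calendar month
monthly_revenue = transaction_data.groupby(
    transaction_data["purchase_date"].dt.to_period("M")
)["revenue"].sum()
monthly_revenue.plot(kind="bar", ax=ax_month, title="Revenue by month")

# Subplot 2: product categories ranked by total sales
category_revenue = (
    transaction_data.groupby("category")["revenue"].sum().sort_values(ascending=False)
)
category_revenue.plot(kind="bar", ax=ax_category, title="Categories by sales")

# Subplot 3: order counts by day of week
weekday_orders = transaction_data["purchase_date"].dt.day_name().value_counts()
weekday_orders.plot(kind="bar", ax=ax_weekday, title="Orders by weekday")

fig.tight_layout()
fig.savefig("dashboard.png")
```

Follow-up prompts like the label-rotation request above translate into small tweaks to exactly this kind of script, which is why the iterative loop converges so quickly.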

Building a Data Pipeline With Codex CLI

So, once you have finished playing around with your data and doing that initial exploratory analysis, you are eventually going to need to move away from those messy, ad-hoc scripts. 

What you actually want to do is move towards building real, reproducible, modular code. In the data world, this usually means building an ETL (Extract, Transform, and Load) pipeline. It is the standard way to pull your data in, clean it up, and save the results for later use.

To show you how this works, we are going to use a very practical scenario. We want to ingest that same e-commerce transaction data from our CSV file, clean up any messy data we find, compute some business aggregations, and then save the final results into a clean, new file. 

Instead of writing all that boilerplate architecture yourself, you can use the Codex CLI to scaffold the entire thing from a high-level description.

Scaffolding the pipeline structure

The very first step is to get the actual project structure set up. A good data pipeline is broken down into separate files, so it is easy to read and maintain later on. You can just ask the Codex agent to do this heavy lifting for you. In your terminal, you would give it a prompt like this:

Create a project layout for an ETL pipeline. I need separate Python modules for extraction, transformation, and loading, plus a main entry point script to run them all.

Codex will go ahead and create those files right in your directory. If you look at your file tree after approving the action, you will see a clean, professional architecture that looks something like this:

etl_pipeline/
├── __init__.py
├── extract.py
├── transformation.py
└── loading.py
run_etl.py

The reason Codex chooses this specific architecture is that it separates the concerns of your code. Your data-reading logic lives completely separate from your math and business logic, which is exactly how data engineers should structure their own work.
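Collapsed into one file for illustration, the entry point simply chains the three stages. In the real layout each function would live in its own module, and the transformation shown here is just a placeholder cleaning step:

```python
import pandas as pd


def extract(input_path: str) -> pd.DataFrame:
    """extract.py: read the raw transactions file."""
    return pd.read_csv(input_path)


def transform(transaction_data: pd.DataFrame) -> pd.DataFrame:
    """transformation.py: placeholder cleaning step (drop rows missing user_id)."""
    return transaction_data.dropna(subset=["user_id"])


def load(transaction_data: pd.DataFrame, output_path: str) -> None:
    """loading.py: persist the processed data."""
    transaction_data.to_csv(output_path, index=False)


def run_pipeline(input_path: str, output_path: str) -> None:
    """run_etl.py: wire the three stages together."""
    load(transform(extract(input_path)), output_path)
```

Because each stage only passes a DataFrame to the next, you can swap out or test any one of them without touching the others.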

Writing transformation logic

Now, let’s get to describing the transformation that we want our data to undergo. In an ETL pipeline, the transformation logic is usually the hardest, but we can just prompt Codex to handle the specifics. Let's say we need to clean up some missing values and calculate exactly how much money each order made.

You can type a prompt directly into the CLI that says:

In transformation.py, write a function that takes the transactions data, drops any rows where the user ID is missing, and creates a new derived column called 'revenue' by multiplying the 'quantity' column by the 'unit_price' column.

Because Codex can read your transactions.csv file, it knows the real column names. It isn't just going to guess and write df['qty'] * df['price'] and hope for the best. It will look at your file, see that your columns are actually named quantity and unit_price, and write the exact, correct pandas code to make that script work.
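Under the rules from our AGENTS.md (descriptive names, type hints, docstrings, vectorized operations), the generated function might look roughly like this; the function name is an assumption, while the column names match the running example:

```python
import pandas as pd


def add_revenue_column(transaction_data: pd.DataFrame) -> pd.DataFrame:
    """Drop rows with a missing user ID and add a derived 'revenue' column.

    Revenue is quantity * unit_price, computed as a single vectorized
    pandas operation rather than a row-by-row loop.
    """
    cleaned_transactions = transaction_data.dropna(subset=["user_id"])
    return cleaned_transactions.assign(
        revenue=cleaned_transactions["quantity"]
        * cleaned_transactions["unit_price"]
    )
```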

Running and validating the pipeline

After the code is generated, the final step is to run the pipeline end-to-end to make sure it works. You can just tell Codex, "Run the run_etl.py script."

When it runs, you will see all the terminal output printing right there in front of you, and it might look like this:

[Image: ETL pipeline terminal output]

The new processed_transactions.csv should look something like this:

[Image: processed_transactions.csv preview]

Now, in the real world, things break. Maybe there was a weird string value hiding inside a numeric column, causing a TypeError. If that happens, you don't need to panic or copy-paste the error into a web browser. Codex CLI will automatically catch that error, read the Python traceback, and often self-correct its own code by proposing a fix on the spot.

This really highlights the core iterative loop of working with an AI coding agent: 

  1. Give Codex a prompt
  2. Review the suggested plan
  3. Approve the code changes
  4. Inspect the terminal output together
  5. Refine it with a new prompt

It is a continuous, collaborative loop that builds working software much faster than typing it all out by hand.

Writing Tests for Data Transformations Using Codex CLI

Testing your code is absolutely critical so you don't accidentally break things in production, but it is still the one step that almost always gets skipped.

Writing tests is tedious, especially when you only want quick insights from a new dataset; stopping everything to write unit tests just feels like a massive chore. But having the Codex CLI right there in your terminal essentially removes that barrier.

Generating pytest tests from existing code

If you want to generate tests for the transformation code we just wrote in the previous section, you don't even have to leave your terminal or open up a blank file. We can use a standard Python testing framework like pytest. You can do this by giving Codex a simple prompt like this:

Write high-quality, maintainable pytest tests for the transform module. Test null handling, extreme edge cases like zeroes or negative values, type casting, and revenue calculation.

Codex will go back and look at the transformation.py file it created earlier. It reads your logic, understands what the functions are supposed to do, and then it generates a brand new test file for you. Below is what you might see in your terminal after Codex is done generating those tests. 

In my case, it generated a new test_transformation.py script inside a new tests folder, whose job is to check if the designated transformation functions did their job correctly.

[Image: generated pytest tests]

Codex doesn't just write generic assertions, but actually creates very realistic, small synthetic data inputs (called fixtures) to feed into your functions to stress test them. It intentionally creates edge cases, like rows with entirely missing user IDs or negative purchase quantities, just to make sure your transformation logic handles those weird, broken scenarios robustly and correctly.

Data validation checks

Now, testing the Python code itself is one thing, but as data professionals, we also need to test the actual data that flows through that code. This is usually called data validation. You want to generate actual assertions that check the overall quality of the data itself before you hand it off to your stakeholders or load it into a dashboard.

You can demonstrate this by asking Codex to generate a specific data validation script. You just type in a prompt like:

Create a data validation script that runs at the very end of the pipeline. It should check that the schema matches our expectations, ensure the null-percentage for user_id is exactly 0%, and verify that all revenue values are greater than or equal to zero.

Codex will then output a dedicated validation script that acts as a final safety net for your project. You can easily configure this to run as a final post-step at the very end of your pipeline. 

That way, if the raw CSV data suddenly changes its structure tomorrow, or if a weird glitch causes negative revenue values to appear out of nowhere, this script will catch it and throw an error immediately. It ensures that your pipeline doesn't just silently pass bad data downstream to your business users.
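The core of such a safety net is just a handful of assertions over the processed frame. This is a minimal sketch; the expected column set and function name are assumptions from our example:

```python
import pandas as pd

EXPECTED_COLUMNS = {"order_id", "user_id", "quantity", "unit_price", "revenue"}


def validate_processed_transactions(processed_transactions: pd.DataFrame) -> None:
    """Raise ValueError on the first failed data-quality check."""
    missing_columns = EXPECTED_COLUMNS - set(processed_transactions.columns)
    if missing_columns:
        raise ValueError(f"Schema mismatch, missing columns: {sorted(missing_columns)}")
    if processed_transactions["user_id"].isna().any():
        raise ValueError("user_id null percentage is not 0%")
    if (processed_transactions["revenue"] < 0).any():
        raise ValueError("Negative revenue values detected")
```

Because it raises instead of printing, the pipeline process exits with an error the moment bad data appears, which is exactly what you want in an automated run.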

[Image: data validation checks output]

Automating Repetitive Data Tasks With Codex CLI

Up until now, we have mostly been looking at how to use the Codex CLI interactively, typing back and forth. But for data professionals who really want to integrate this tool into their regular, everyday workflow, there are some more advanced usage patterns that can basically put your boring work on autopilot.

Converting Jupyter Notebooks to production scripts

Jupyter Notebooks are absolutely fantastic for playing around with data and exploring things initially, but they are pretty terrible when it comes time to actually run that code reliably in production. Usually, you have to spend hours manually copying cells, pasting them into Python files, and fixing all the weird global variable issues.

With Codex CLI, you can literally just point the agent at your notebook and ask it to do the heavy lifting for you. You just open your terminal and type a prompt exactly like this:

Refactor analysis.ipynb into a modular Python package with separate files for data loading, transformation, visualization, and a main.py entry point.

When you approve this, Codex reads the JSON structure of your notebook file, pulls out the actual Python code, ignores the random output logs, and reorganizes the whole thing. 

If you look at the before-and-after structures, they are vastly different. Before, you just had one giant analysis.ipynb file where everything was tangled together. 

After Codex is done, you will see a clean, professional folder with separate data_loader.py, transformer.py, and visualizer.py files (names might be different for you), all tied together neatly by a main.py script. It instantly bridges that gap between your messy exploratory data analysis and actual production-ready software engineering.

Using codex exec for non-interactive automation

Now, sometimes you don't actually want to sit there and interact with the chat interface at all. If you are building automated pipelines, like those automated checks that run right before you share your code with your team, you need the AI to just do its job in the background, completely on autopilot. 

That is exactly where the codex exec command comes into play. It is designed specifically to run Codex in scripts and non-interactive environments without asking you for permission every single step of the way.

To give you a practical example of how you might use this, let's run a quick test. We can use codex exec as a simulated CI/CD check to catch bad data automatically.

Open your terminal and type this exact command:

codex exec --skip-git-repo-check "Read transactions.csv. Write and run a quick python script to check if the 'quantity' column contains any negative numbers. If it does, print 'DATA VALIDATION FAILED: Negative quantities detected.' If it is clean, print 'DATA VALIDATION PASSED'." 2> /dev/null

When you hit enter, Codex will run non-interactively. It won't open up the usual interactive chat interface, and approval behavior depends on the configured approval flags and defaults; you may still need to allow certain actions unless you disable approvals. For more information, I recommend reading the Codex Documentation.

It will quickly write the validation script, execute it against your local CSV file, and then spit out the final result directly to your standard terminal output as long as the directory is treated as trusted and approvals are configured to allow this. You should see an output printed to your console that looks something like this:

[Image: codex exec output]

This command has many use cases. Imagine using it as-is in a pre-commit hook or GitHub Actions workflow. If your pipeline ever encounters data that is missing a column, has NaN values, or shows any other unexpected behavior, Codex can catch it right then and there, without you having to write any of those pytest and validation scripts manually yourself.

Best Practices for Data Professionals Using Codex CLI

When you are using AI tools for data work, the way you interact with the agent completely changes the quality of the Python code you get back. Let's look at some best practices to make sure your workflow stays as smooth and professional as possible.

Writing effective prompts for data tasks

The very first thing to master is writing effective prompts. You can't just tell the AI to "clean the data" and expect perfect results. Here is how you should structure your requests:

  • Be specific: You have to be specific about the actual column names, the exact data types you want, and the expected output format you are looking for. For example, instead of a vague request, you should explicitly say something like, "cast the 'purchase_date' column to datetime and output a summarized CSV."

  • Reference files directly: Also, a really handy trick is to reference your files directly using the @ syntax in your prompt. If you type @transactions.csv, it forces Codex to read that specific file directly into its context right then and there. 

  • Break down complex tasks: Perhaps most importantly, you should always try to break tasks into smaller units rather than letting Codex act on one massive mega-prompt. Using Codex’s dedicated plan mode to first create a draft and then act on it step by step is the way to go for complex tasks.

If you want to take your prompting to the next level, I recommend taking our Prompt Engineering with the OpenAI API course.

When to use each approval mode

As we touched on earlier, the CLI has different approval modes, and knowing exactly when to use each one is important. Here is a guide for when you should be using which mode:

  • Read-only: This is what you should definitely use when you are still learning the tool, or when dealing with sensitive production data or unfamiliar tasks. It keeps you firmly in control throughout.
  • Auto (workspace): Once you are more comfortable, Auto is great for routine transformations and refactors in a project that is safely version‑controlled. Codex can edit files and run scripts inside the project folder, while still asking before doing anything risky outside that scope.
  • Full access: Reserve this for sandbox experimentation or one‑off, throwaway analysis where you care more about speed than safety. In this mode, Codex has broad access to your machine and will ask for fewer confirmations, so you should only use it with repos and tasks you fully trust.

Keeping your data workflows reproducible

Finally, keeping your data workflows reproducible is very important for any data professional. One of the biggest rules you should follow is to always run Codex inside an initialized Git repository. Because Codex is going to be writing and editing files on your machine, having Git tracking those changes means you can easily see exactly what the AI did and undo it if things go wrong.

You should also definitely make sure to commit that AGENTS.md file we created earlier right alongside your project code. That way, if another data scientist on your team clones your repository and opens Codex, the whole team benefits from the exact same coding standards and instructions. 

The same applies to any agent skills you have defined for individual tasks. For inspiration, check out our guide on over a hundred top agent skills for Codex and other agentic coding tools.

And if you are working on a heavy analysis over several days, you don’t have to start over every morning. You can simply use the codex resume command in your terminal to continue multi‑session data projects. It reopens your last Codex session in that project so you can pick up where you left off, with the previous conversation, plans, and file changes still in context (subject to normal model and history limits).

For more best practices in agentic coding, you can also check out our Claude Code Best Practices guide. While Claude Code and Codex differ, as we have pointed out in our Codex vs Claude Code comparison, many of the foundational concepts apply to Codex, too.

Conclusion

We set up Codex CLI specifically for your data work and your local Python environments. From there, we walked through generating initial exploratory data analysis scripts from scratch, building reproducible ETL data pipelines, writing automated transformation tests (that are skipped way too often), and finally exploring advanced ways to automate those super-repetitive daily data tasks. 

The most important part to remember is that we did absolutely all of this right from the command line, completely without jumping back and forth to a web browser. Codex CLI effectively bridges that frustrating gap between messy, exploratory data analysis and actual, production-quality data engineering. 

If you’re interested in learning how to build a more complex agent with Codex CLI, I recommend you give our Codex CLI MCP Tutorial a read. It walks you through the process of creating a financial portfolio dashboard agent.

Codex CLI For Data Analytics FAQs

What is Codex CLI, and how is it different from ChatGPT for data work?

Codex CLI runs inside your terminal, so it can directly read your local project files, write or refactor scripts, and run commands to show real outputs and errors. ChatGPT in a browser is usually disconnected from your working directory, so you end up copy-pasting data, code, and tracebacks back and forth.

Can Codex CLI generate exploratory data analysis (EDA) scripts from a CSV automatically?

Yes. If the CSV is in your project folder, you can prompt Codex to profile columns, check missing values, compute summary statistics, and generate matplotlib charts. The key is to point it to the file explicitly so it reads the actual schema and uses the real column names instead of guessing.

How do you use Codex CLI to build an ETL pipeline for a dataset?

A reliable workflow is to ask Codex to scaffold a simple pipeline structure first (extract, transform, load), then implement transformations based on your rules, then run the pipeline and fix issues using the error tracebacks. You get the most consistent results when you keep the transformation logic modular and make Codex run the scripts, so you see real outputs, not hypothetical ones.

Can Codex CLI write pytest tests for data transformations and validation checks?

Yes. Codex can generate pytest tests that cover null handling, type casting, edge cases, and formula checks like revenue calculations. It can also create a separate validation script that enforces schema expectations and basic data quality rules after the pipeline runs, which helps catch silent failures and drift.

What are the best practices for using Codex CLI safely on real data projects?

Start in a conservative approval mode until you trust how it behaves, and keep your work inside a version-controlled repo so every change is reviewable and reversible. Be specific in prompts, reference the exact files you want it to read, and avoid giving it broad instructions like “clean the data” without defining what clean means for your use case.


Author: Nikhil Adithyan
A hustler striving towards making accessible financial analytics tools and a marketer helping fintech companies expand their reach and visibility.
Currently working on building two ventures:
- BacktestZone, a no-code platform to backtest technical trading strategies
- Scriptonomy, a FinTech-focused marketing agency