
Best Practices

How to talk to the agent, write good prompts, and get reliable results.

Understanding the three modes

How you phrase your request determines what the agent does. There are three distinct modes:

"Test the login page"

The agent opens the page in your browser, interacts with elements, and verifies behavior in real time.

"Write a test for login"

The agent generates automated test code (Playwright, Cypress, or Selenium) and saves it to your project.

"Write a test case for login"

The agent creates a structured document with manual steps, preconditions, and expected results.
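The third mode's output can be pictured as structured data. Here is a minimal sketch of what a test case document might contain; the field names are illustrative assumptions, not the agent's actual output format:

```typescript
// Minimal sketch of a structured test case document.
// Field names are illustrative assumptions, not the agent's output format.
interface TestCase {
  title: string;
  preconditions: string[];
  steps: { action: string; expected: string }[];
}

const loginTestCase: TestCase = {
  title: "Login with valid credentials",
  preconditions: ["A registered user account exists", "The user is logged out"],
  steps: [
    { action: "Go to /login", expected: "The login form is visible" },
    {
      action: "Enter a valid email and password, then click Sign In",
      expected: "The URL changes to /dashboard",
    },
  ],
};
```

Whatever the exact format, the key ingredients are the same: preconditions, ordered steps, and an expected result for each step.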

Be explicit about which mode you want. "Test X" and "write a test for X" produce very different results.

Writing good prompts

The agent understands natural language, so no special syntax is needed. But how you phrase things matters:

Be specific about what to do

Tell the agent exactly what steps to take and what you expect to see. Vague instructions lead to guesswork.

Good: Go to /settings, click "Change Password", enter "OldPass1" in the current password field, enter "NewPass1" in the new password field, click Save, and verify a success toast appears.
Too vague: Test the change password feature.

Include test data

Give the agent the actual values to type. Don't make it guess usernames, passwords, or form inputs.

Good: Fill in the registration form with name "Jane Doe", email "jane@test.com", password "Secure123!", and submit.
Missing data: Fill in the registration form and submit it.

State what success looks like

Tell the agent what to verify after an action. This is how it knows the test passed.

Good: After login, verify the URL changes to /dashboard and the text "Welcome back" is visible.
No verification: Log in to the app.

Describe elements by what you see

Refer to elements by their visible text or label, not by HTML tags or CSS classes. The agent identifies elements by what's on screen, not the underlying markup.

Good: Click the "Get Started" button at the top of the page.
Don't use selectors: Click the .btn-primary element in div.hero-section.
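If you generate prompts programmatically, a rough heuristic can catch selector-style references before they reach the agent. This check is our own sketch, not a product feature:

```typescript
// Rough heuristic (our own sketch, not a product feature): flag prompt
// tokens that look like CSS selectors rather than visible text.
function looksLikeSelector(prompt: string): boolean {
  return prompt.split(/\s+/).some(
    (token) =>
      /^[.#][\w-]+/.test(token) ||               // .btn-primary, #login
      /^(div|span|button|input)\./.test(token)   // div.hero-section
  );
}
```

For example, this flags "Click the .btn-primary element in div.hero-section." but not "Click the \"Get Started\" button at the top of the page."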

Structuring complex tests

For multi-step flows, break your prompt into a clear numbered sequence. The agent follows instructions top to bottom.

Example: E-commerce checkout flow

1. Go to /products and add the first product to cart
2. Open the cart and verify the item count shows 1
3. Click checkout, fill in shipping: "123 Main St", "New York", "10001"
4. Use test card 4242424242424242, expiry 12/28, CVC 123
5. Submit the order and verify the confirmation page shows "Order confirmed"

If a test is very long (10+ steps), consider splitting it into separate tests. Shorter tests are easier to debug when something fails.
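The splitting advice can be sketched as a small helper: given a long list of steps, break it into shorter tests. The function name and the threshold of five steps are our choices for illustration, not anything the product prescribes:

```typescript
// Sketch of the splitting advice: break a long step list into smaller
// tests. The helper name and threshold are ours, not the product's.
function splitSteps(steps: string[], maxSteps = 5): string[][] {
  const tests: string[][] = [];
  for (let i = 0; i < steps.length; i += maxSteps) {
    tests.push(steps.slice(i, i + maxSteps));
  }
  return tests;
}

// A 12-step flow becomes three shorter tests of 5, 5, and 2 steps.
const steps = Array.from({ length: 12 }, (_, i) => `Step ${i + 1}`);
const tests = splitSteps(steps);
```

Each shorter test should still start from a known page and end with its own verification, so a failure points at one small sequence instead of a 12-step chain.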

What the agent already knows

When you send your first message, the agent automatically receives context about your environment:

  • Your project - name, test framework (Playwright, Cypress, or Selenium), and platform
  • Current browser page - the URL and title of whatever's open in Chrome
  • Existing knowledge - anything the agent has previously learned about the current page

You don't need to repeat what's already visible. If you're on the login page, just say "test the login form" - the agent already knows which page you're on.
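Expressed as data, the context above might look something like this. The field names and values are illustrative assumptions for explanation only; the actual payload format isn't documented here:

```typescript
// Illustrative context payload covering the three items listed above.
// Field names and values are assumptions, not the actual format.
const agentContext = {
  project: { name: "my-shop", framework: "Playwright", platform: "web" },
  browserPage: { url: "https://my-shop.example.com/login", title: "Sign in" },
  knowledge: ['The login form includes a "Remember me" checkbox'],
};
```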

Choosing the right model

Lama offers multiple AI models. Each has different strengths:

Sonnet 4.6 (Recommended)

Best all-rounder. Fast, capable, and cost-effective. Use this for most testing tasks.

Haiku 4.5 (Budget)

Fastest and cheapest. Good for simple, well-defined tasks like filling forms or quick navigation checks.

Opus 4.6 (Advanced)

Most capable. Best for complex multi-step flows, edge cases, and when you need thorough exploration.
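The guidance above boils down to a simple rule of thumb. The mapping below is our paraphrase of the descriptions, not an official API:

```typescript
// Rule of thumb paraphrased from the model descriptions above.
// The mapping is our summary, not an official API.
type TaskComplexity = "simple" | "standard" | "complex";

function pickModel(task: TaskComplexity): string {
  switch (task) {
    case "simple":
      return "Haiku 4.5"; // fastest and cheapest
    case "complex":
      return "Opus 4.6"; // most capable
    default:
      return "Sonnet 4.6"; // recommended all-rounder
  }
}
```

When in doubt, default to Sonnet 4.6 and switch only when a task is clearly trivial or clearly demanding.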

Working with the agent

Give context about non-obvious flows

The agent can see your page, but it doesn't know your app's business logic. If your app has hidden flows (like a confirmation modal, tabbed settings, or a two-step form), mention them upfront.

The settings page has tabs. Click the "Security" tab first, then you'll see the password fields.

Correct and redirect

If the agent takes a wrong turn, just tell it what to do differently. You don't need to start over. The agent processes corrections as follow-up instructions.

That's the wrong button. Click the "Save Changes" button at the bottom of the form instead.

Use follow-up messages

You can send messages while the agent is working. These get queued and processed when the agent reaches a natural pause. Use them to add context, correct course, or extend the test.

Use Plan Mode for exploration

When you're not sure exactly what to test, enable Plan Mode. The agent will explore your app freely, then present a structured test plan for your approval before making any changes.

What the agent is good at

The agent adapts its approach based on what you ask. It has deep expertise in:

  • Forms and validation
  • Multi-step workflows (checkout, signup, onboarding)
  • Exploratory testing and bug hunting
  • Visual and responsive layout checks
  • Accessibility (WCAG)
  • Security testing (XSS, injection, auth bypass)
  • API testing
  • Debugging failing tests

You don't need to configure anything. Just describe what you want to test and the agent applies the right approach automatically.

The agent remembers your app

As the agent interacts with your app, it gets smarter. It remembers page layouts, form fields, navigation paths, and quirks it discovered. Repeat visits are faster because it already knows what to expect.

You can also use the Knowledge panel to teach the agent things it can't discover on its own - like login credentials for different user roles or business rules. See Knowledge & Learning for more.
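As a rough illustration, taught knowledge amounts to named facts the agent can look up later. The entries and their structure here are entirely hypothetical; see Knowledge & Learning for how this actually works:

```typescript
// Hypothetical knowledge entries; structure and values are invented
// for illustration, not taken from the product.
const knowledgeEntries = [
  { topic: "admin login", note: 'Use admin@test.com with password "AdminPass1"' },
  { topic: "business rule", note: "Orders over $500 require manager approval" },
];
```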

Common pitfalls

  • Don't use CSS selectors in prompts - describe elements by their visible text or role, not their class names or HTML tags
  • Don't confuse "test" with "write a test" - saying "test X" means interact and verify in the browser; "write a test for X" means generate code
  • Don't skip the extension - the agent needs the Chrome extension to see and control your browser
  • Don't assume state - always start from a known page or URL rather than assuming the agent knows where it is
  • Don't add extra requirements - the agent does exactly what you ask, no more. If you want validation testing, say so explicitly

Need help? Contact us at hi@lamaqa.com