What worked well
AI significantly accelerated our workflow in several concrete ways.
For early drafts, tools like ChatGPT and DeepSeek helped us quickly
generate multiple versions of section descriptions, allowing the team to compare different tones and
structures before settling on a final direction. When polishing language, AI was particularly useful for
rephrasing ambiguous sentences, fixing grammatical errors, and ensuring consistent
terminology across the portfolio.
For visual content, Doubao and NotebookLM enabled us to generate concept
images and colour mood boards without spending excessive time searching for stock assets. This allowed us
to focus more attention on the core design logic and user research.
In the development phase, Codex helped complete repetitive code patterns and suggested
debugging fixes, which reduced manual coding time.
What we learned
The most important lesson was that AI should support the team rather than replace human
thinking. While AI-generated suggestions were often helpful, they were never perfect. For
example, AI-generated text sometimes sounded polished but lacked accuracy or missed important context
from our actual user research. Similarly, AI-generated images often required manual adjustments
(e.g., removing watermarks via Doubao, adjusting colours, or cropping) to fit our design
requirements.
We also learned that effective AI use requires clear prompting and iterative
refinement. The first output was rarely usable, so we needed to test different prompts, compare
results, and combine the best parts from multiple generations. Every AI-generated output, whether code,
text, or image, was reviewed, edited, and approved by at least one team member before inclusion.
Limitations we encountered
Despite its benefits, AI had clear limitations. Image generation tools sometimes produced visuals with
incorrect details or unnatural compositions, requiring additional editing or regeneration. Code
generation tools occasionally introduced logic errors or used outdated syntax, which meant we had to
manually debug and verify every code snippet. Additionally, AI-generated text could feel generic or
overly formal, so we always adapted it to match our team's authentic voice.
Our approach to responsible AI use
To ensure academic integrity and follow the course policy, we adopted a structured approach:
- AI was never used to generate core design logic, user research findings, or evaluation conclusions; these remained entirely our original work.
- Every AI-generated image includes a caption such as "Image generated via Doubao".
- All AI tools used are documented in the AI Tools table and reference list.
- We maintained an /ai-logs folder in the system repository recording key prompts used during development.
Overall, all AI-generated content, including code, images, and text, was reviewed, edited, and
approved by the team before inclusion.