This week was the first workshop of the semester. I learned to use the browser's Inspect tool to view page source, logged in to my cPanel site, and uploaded a website I made to the host. Before class I had already studied HTML basics on Codecademy—page structure and elements like headings, paragraphs, links, and images—so it felt rewarding to connect those concepts to real site operations.
I also began using CSS to control fonts, sizes, and layout. Changing styles with CSS is far more flexible and interesting than tweaking elements one by one in an editor.
Thinking back, I remembered learning web design in junior high with Microsoft FrontPage. That approach was easy for beginners but lacked the flexibility of modern HTML/CSS. My current challenge is not writing individual elements, but designing an overall layout: hierarchy, spacing, and visual flow need improvement. I plan to focus on layout techniques like Flexbox, Grid, and wireframing to make my pages cleaner and more attractive.
We also tried to upload the site with FileZilla but could not establish a connection due to technical issues. This reminded me to prepare better for workshops: install and test required software in advance, read the assigned materials, and check the common connection settings (account, host, port, passive/active transfer mode) when problems arise.
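Those same connection settings can be checked programmatically. Below is a minimal sketch using Python's standard-library ftplib; the host name and credentials are placeholders, not my actual cPanel details, and the small helper fixes a common mistake of pasting a URL into the Host field.

```python
from ftplib import FTP

def normalize_host(raw):
    """Strip the scheme and trailing slash people often paste into the Host field."""
    host = raw.strip()
    for prefix in ("ftp://", "http://", "https://"):
        if host.startswith(prefix):
            host = host[len(prefix):]
    return host.rstrip("/")

def check_ftp_connection(host, user, password, port=21, passive=True):
    """Try to connect and log in, returning the server's welcome banner.

    Raises socket errors for a bad host/port and ftplib.error_perm
    for bad credentials, which narrows down where the failure is.
    """
    ftp = FTP()
    ftp.connect(normalize_host(host), port, timeout=10)
    ftp.login(user, password)
    ftp.set_pasv(passive)  # toggle passive/active transfer mode
    welcome = ftp.getwelcome()
    ftp.quit()
    return welcome

# Usage (placeholder values):
# check_ftp_connection("ftp://ftp.example.com/", "user", "secret")
```

Running this before class would have separated "wrong host/port" from "wrong password" instead of leaving FileZilla's generic error to guess at.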
This week's workshop covered web scraping: using plugins or tools to extract webpage content and save it as structured tables. I practiced on a BBC iPlayer page with the WebScraper.io plugin, creating a sitemap and adding selectors to define the areas and data formats to scrape. After configuring the selectors and navigation, the data could be exported as CSV or Excel.
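The same idea (select elements, collect their text, export rows) can be sketched in plain Python with the standard library. The HTML below is a made-up programme listing, not the real iPlayer markup; a real page would be fetched first, but the selector-and-export logic is the same.

```python
import csv
import io
from html.parser import HTMLParser

# Hypothetical listing standing in for a fetched page.
SAMPLE_HTML = """
<div class="programme"><h2 class="title">Doctor Who</h2></div>
<div class="programme"><h2 class="title">Planet Earth</h2></div>
"""

class TitleScraper(HTMLParser):
    """Collect the text of every element whose class attribute is 'title'."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("class") == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        self.in_title = False

    def handle_data(self, data):
        if self.in_title and data.strip():
            self.titles.append(data.strip())

scraper = TitleScraper()
scraper.feed(SAMPLE_HTML)

# Export the scraped rows as CSV, as the plugin's export step does.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["title"])
writer.writerows([[t] for t in scraper.titles])
```

Writing it out this way made the plugin less magical to me: a selector is just a rule for which elements to keep, and the export is one row per match.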
Web scraping is a practical skill: it speeds up market research, academic work, and competitor monitoring by collecting large amounts of structured data automatically. It can also help my job search by gathering job postings or company contacts from recruitment sites.
A single class is only an introduction, though. My classmates and I ran into challenges like writing precise selectors, handling pagination and dynamic content, and cleaning scraped data. There are also legal and ethical considerations: avoid scraping sensitive or copyrighted material in bulk, and respect site rules such as robots.txt and terms of service.
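Of those challenges, cleaning is the most mechanical. A minimal sketch of the kind of cleanup scraped text usually needs (the sample values are invented):

```python
def clean_scraped(values):
    """Trim whitespace, drop blanks, and de-duplicate while keeping order."""
    seen = set()
    cleaned = []
    for v in values:
        v = " ".join(v.split())  # also collapses internal runs of whitespace
        if v and v not in seen:
            seen.add(v)
            cleaned.append(v)
    return cleaned

# Scraped cells often carry stray whitespace and duplicates:
raw = ["  Doctor Who ", "Doctor\nWho", "", "Planet  Earth"]
```

After normalisation the two "Doctor Who" variants collapse into one entry, which is exactly the kind of silent duplication that would distort counts in later analysis.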
Next steps: practice scraping sites with different structures, deepen my knowledge of selectors, and learn tools for more advanced scraping.
This week we focused on questionnaire design. Our group's topic is student engagement with AI in higher education. Following the instructor's advice, we narrowed the target population from “university students” to “master's students” to make the study more focused. The survey covers discipline (STEM, business, social sciences, medicine, etc.), AI use cases and motivations, frequency of use, and views on AI-related academic integrity issues.
Designing a logical, analyzable questionnaire is challenging. You must consider respondents' language, clarity and neutrality of questions, each item's relevance to the study goals, and whether answers will produce comparable variables for analysis (consistent scales, unambiguous options). We also discussed question order, filter questions, and response formats (single choice, multiple choice, Likert scales).
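"Comparable variables" becomes concrete once you code the responses. A small sketch, assuming a standard 5-point Likert scale (the labels here are generic, not our survey's actual wording):

```python
# Assumed 5-point scale; a consistent mapping makes items comparable.
LIKERT = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def code_responses(responses):
    """Map label answers onto a consistent 1-5 numeric variable.

    Returns None for anything outside the scale, so ambiguous or
    free-text answers surface instead of silently skewing the data.
    """
    return [LIKERT.get(r.strip().lower()) for r in responses]

# Invented responses, including one off-scale answer:
answers = ["Agree", "strongly agree", "Neutral ", "maybe"]
```

The None for "maybe" is the point: if response options are ambiguous, the mess shows up at coding time, which is why the options have to be fixed at design time.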
Overall, these three weeks formed a clear learning chain—technical operation → data acquisition → data collection design. Weekly practice has been valuable, but I still need more hands-on experience and theoretical study to consolidate these skills.