Chang Shen
2026
PropGenie: A Multi-Agent Conversational Framework for Real Estate Assistance
Chang Shen | Shaozu Yuan | Kuizong Wu | Long Xu | Meng Chen
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 3: System Demonstrations)
In this paper, we present PropGenie, a novel multi-agent framework based on large language models (LLMs) to deliver comprehensive real estate assistance in real-world scenarios. PropGenie coordinates eight specialized sub-agents, each tailored for distinct tasks, including search and recommendation, question answering, financial calculations, and task execution. To enhance response accuracy and reliability, the system integrates diverse knowledge sources and advanced computational tools, leveraging structured, unstructured, and multimodal retrieval-augmented generation techniques. Experiments on real user queries show that PropGenie outperforms both a general-purpose LLM (OpenAI’s o3-mini-high) and a domain-specific chatbot (Realty AI’s Madison) in real estate scenarios. We hope that PropGenie serves as a valuable reference for future research in broader AI-driven applications.
2022
Surfer100: Generating Surveys From Web Resources, Wikipedia-style
Irene Li | Alex Fabbri | Rina Kawamura | Yixin Liu | Xiangru Tang | Jaesung Tae | Chang Shen | Sally Ma | Tomoe Mizutani | Dragomir Radev
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Fast-developing fields such as Artificial Intelligence (AI) often outpace the efforts of encyclopedic sources such as Wikipedia, which either do not completely cover recently introduced topics or lack such content entirely. As a result, methods for automatically producing content are valuable tools to address this information overload. We show that recent advances in pretrained language modeling can be combined in a two-stage extractive and abstractive approach to Wikipedia lead-paragraph generation. We extend this approach to generate longer Wikipedia-style summaries with sections and examine how such methods struggle in this application through detailed studies with 100 reference human-collected surveys. To the best of our knowledge, this is the first study on utilizing web resources for long Wikipedia-style summaries.