
158.755-2025 Semester 1
Massey University
Project 3

Deadline: Submit by midnight of 15 May 2025.
Evaluation: 25% of your final course grade.
Late Submission: See Course Guide.
Work: This assignment may be done in pairs. No more than two people per group are allowed. Should you choose to work in pairs, indicate this upon submission of your assignment.
Purpose: Learning outcomes 1 - 5 from the course outline.

Project outline:
Kaggle is a crowdsourcing, online platform for machine learning competitions, where companies and researchers submit problems and datasets, and the machine learning community competes to produce the best solutions. This is a perfect training ground for real-world problems. It is an opportunity for data scientists to develop a portfolio which they can advertise to prospective employers, and it is also an opportunity to win prizes.
        For this project, you are going to work on a Kaggle dataset.
        You will first need to create an account with Kaggle. Then familiarise yourself with the Kaggle platform.
Your task will be to work on a competition dataset which is currently in progress. While you will be submitting your solutions and appearing on the Kaggle Leaderboard, this project will be run as an in-class competition. The problem description and the dataset can be found here: https://www.kaggle.com/competitions/geology-forecast-challenge-open/overview
Note that this dataset and the overall problem are challenging. You will be trying to solve the problem with the algorithms and approaches that we have learned so far, while being able to submit a new solution up to 5 times each day; your solutions will therefore be constrained in terms of the effectiveness of the final solutions that you can produce – but it will all be a valuable learning experience nonetheless.
        The competition is the Geology Forecast Challenge, which is a supervised classification problem where the task is to predict the type of geological material that a tunnel boring machine (TBM) will encounter ahead in the rock face.
        What is being predicted? You are predicting the rock class label (e.g. “Shale,” “Sandstone,” “Clay,” etc.), which represents the type of ground material at specific positions ahead of the tunnel boring machine.
        What does the data represent? The input features are sensor readings collected from the TBM during its operation, including measurements like thrust force, penetration rate, torque, advance rate, and more. These are time series of machine telemetry that reflect how the TBM interacts with the geological material. The labels (target values) represent ground truth rock types observed during the boring process.
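The class-balance and missing-value checks that a first EDA pass involves can be sketched as below. Note that the DataFrame here is a synthetic stand-in with assumed column names (thrust force, penetration rate, torque, as described above); the real competition files and columns may differ, so swap in the downloaded data.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the TBM telemetry data; column names are assumptions
# based on the description above, not the actual competition schema.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "thrust_force": rng.normal(1200, 150, n),
    "penetration_rate": rng.normal(3.5, 0.8, n),
    "torque": rng.normal(900, 120, n),
    "rock_class": rng.choice(["Shale", "Sandstone", "Clay"], n, p=[0.5, 0.3, 0.2]),
})
df.loc[rng.choice(n, 20, replace=False), "torque"] = np.nan  # simulate sensor gaps

# Two core EDA checks: class balance and missing values per column.
class_balance = df["rock_class"].value_counts(normalize=True)
missing = df.isna().sum()
print(class_balance)
print(missing)
```

From here, summary statistics (`df.describe()`), histograms, and correlation heatmaps follow naturally on the real data.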
        Task:
Your work is to be done using a Jupyter Notebook (Kaggle provides a development/testing environment), which you will submit as the primary component of your work. A notebook template will be provided showing the minimum information you must report as part of your submission.
        Your tasks are as follows:
        1. You will first need to create an account with Kaggle.
        2. Then familiarise yourself with the Kaggle platform.
        3. Familiarise yourself with the submission/testing process.
        4. Download the datasets, then explore and perform thorough EDA.
        5. Devise an experimental plan for how you intend to empirically arrive at the most accurate solution.
6. Explore the accuracy of kNN for solving the problem and use the scores from your kNN for the class competition.
7. Explore scikit-learn (or other libraries) and employ a suite of different machine learning algorithms not yet covered in class, and benchmark them against kNN performance.
        8. Investigate which subsets of features are effective, then build solutions based on this analysis and reasoning.
9. Devise solutions to these machine learning problems that are creative, innovative and effective. Since much of machine learning is trial and error, you are asked to continue to refine and incrementally improve your solution. Keep track of all the different strategies you have used, how they have performed, and how your accuracy has improved or deteriorated with different strategies. Also provide your reasoning for trying the strategies and approaches you did. Remember, you can submit up to four solutions to Kaggle per day. Keep track of your performance and consider even graphing it.
10. Take a screenshot of your final and best submission score and standing on the Kaggle leaderboard for both competitions and save each as a jpg file. Then embed these jpg screenshots into your Notebooks, and record your submission scores on the class Google Sheet (to be made available on Stream) where the class leaderboards will be kept.
11. If you are working in pairs, you must explain in the notebook, in the Appendix, what contribution each person made to the project.
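A minimal kNN baseline for steps 4-6 might look like the sketch below. The data here is a synthetic placeholder; substitute the real competition features and labels once you have downloaded them.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data standing in for the Kaggle training set.
X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           n_classes=3, random_state=42)

# Scaling inside the pipeline means the scaler is fit on each CV training
# fold only, which matters for distance-based models like kNN.
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(knn, X, y, cv=5)
print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

This baseline is the starting point to refine incrementally per step 9.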
The Kaggle platform and its community of data scientists provide considerable help in the form of 'kernels', which are often Python Notebooks and can help you get started. There are also discussion fora which can offer help and ideas on how to go about solving problems. Copying code from these resources is not acceptable for this assignment. Doing so can be regarded as plagiarism and can be followed by disciplinary action.
        Marking criteria:
        Marks will be awarded for different components of the project using the following rubric:
Component: EDA (5 marks)
- Breadth: summary stats, class balance, missing-value and outlier checks, chainage/time trends.
- Visuals: histograms, boxplots, correlation heatmaps, time series, etc.
- Preparation: imputation or removal of missing data, outlier treatment, clear rationale where needed.
- Narrative: concise markdown explaining findings and guiding the modeling choices.
Component: kNN classification (30 marks)
- Baseline & Tuning: benchmark various values of k and different distance metrics; report CV mean ± std, the final test accuracy, and the custom metric used in the competition.
- Leakage Control: ensure no data leakage happens.
- Presentation: table of results (e.g. k vs. accuracy/suitable metric), e.g. a plot of accuracy vs. k, and a confusion matrix if appropriate.
- Interpretation: discuss under-/over-fitting as k varies, and justify your chosen k.
- Leaderboard: only these kNN results go into the class Google Sheet.
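The baseline-and-tuning expectation above could be approached with a grid search over k and the distance metric; a sketch on synthetic stand-in data (replace with the competition training set):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data.
X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)

# Tune k and the distance metric together with 5-fold cross-validation.
pipe = Pipeline([("scale", StandardScaler()), ("knn", KNeighborsClassifier())])
grid = GridSearchCV(
    pipe,
    {"knn__n_neighbors": [1, 3, 5, 9, 15],
     "knn__metric": ["euclidean", "manhattan"]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```

`grid.cv_results_` holds the mean and std per setting, which feeds directly into the required results table and accuracy-vs-k plot.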
Component: Classification Modeling (Other Algorithms) (25 marks)
- Model Diversity: at least three algorithm families (e.g. tree-based, linear, kernel); brief rationale for each.
- Tuning: grid or randomized search with CV; report best hyperparameters.
- Comparison Table: side-by-side metrics (accuracy, macro-averaged precision/recall, train time).
- Interpretation: which outperform kNN and why.
- Note: these results inform your analysis and earn marks for this component only; they are not entered into the class leaderboard.
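One way to build the comparison across algorithm families; the three models and the data here are illustrative stand-ins, not a prescribed set:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data; one model per family (tree-based, linear, kernel).
X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)

models = {
    "random_forest": RandomForestClassifier(random_state=0),
    "logistic_regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "svm_rbf": make_pipeline(StandardScaler(), SVC()),
}
results = {name: cross_val_score(m, X, y, cv=5).mean()
           for name, m in models.items()}
for name, acc in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} {acc:.3f}")
```

Extending the dictionary with precision/recall (via `cross_validate` and multiple scorers) and wall-clock training time gives the full comparison table.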
Component: Analysis (20 marks)
- Design Clarity: presentation and design of all your experiments.
- Cross-Validation: choice of testing strategies across all your experiments.
- Feature Selection: robustness in feature analysis and selection.
- Engineered Features: at least one new feature, with before/after performance across all your experiments.
- Data-Leakage Prevention: explicit note on where and how you guard against leakage.
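A common way to guard against the leakage this component flags is to keep all preprocessing and feature selection inside a scikit-learn Pipeline, so every step is re-fit on each training fold and test folds never influence the fitted statistics. A sketch on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in data with many uninformative features.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           n_classes=3, random_state=0)

# Scaling and feature selection live INSIDE the pipeline: they are fit on
# the training portion of each fold only, so no test data leaks into them.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=8)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])
scores = cross_val_score(pipe, X, y, cv=5)
print(f"leakage-safe CV accuracy: {scores.mean():.3f}")
```

The anti-pattern to avoid is fitting the scaler or selector on the full dataset before splitting; that is exactly where leakage creeps in.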
Component: Kaggle submission score (20 marks)
- Successful submission of predictions to Kaggle, listing of the score on the class leaderboard, and position on the class leaderboard based ONLY ON THE kNN models.
- The winning student will receive full marks. The next best student will receive 17 marks, and every subsequent placing will receive one mark less, with the minimum being 10 marks for a successful submission.
- An interim solution must be submitted by May 1 and the class leaderboard document (the Google Sheet linked below) must be updated. This constitutes 10 marks; if it is not completed by this date, 10 marks will be deducted from the submission score. For this, you must submit a screenshot of your submission date and score.
Bonus marks will be awarded for the use of cluster analysis in exploring the dataset, and for exceptional work in extracting additional features from this dataset and incorporating them into the training set, together with a comparative analysis showing whether or not they have increased predictive accuracy.
Component: Reading Log (PASS)
- The compiled reading logs up to the current period.
- The peer discussion summaries for each week.
- Any relevant connections between your readings and your analytical work in the notebook. If a research paper influenced how you approached an implementation, mention it.
Component: BONUS MARKS
- Cluster analysis: max 5 marks.
- Additional feature extraction: max 5 marks.

Google Sheets link: https://docs.google.com/spreadsheets/d/1CxgPKnIwzakbmliKiz1toatGz45HFQynaLh54RRU2lo/edit?usp=sharing
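For the cluster-analysis bonus, a quick unsupervised exploration could start from a sketch like this; k-means with silhouette scores is one option among many, and the data here is a synthetic stand-in:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Stand-in features; labels are deliberately ignored for clustering.
X, _ = make_classification(n_samples=300, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
Xs = StandardScaler().fit_transform(X)

# Compare a few cluster counts via silhouette score (higher is better).
scores = {
    k: silhouette_score(
        Xs, KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xs))
    for k in (2, 3, 4, 5)
}
best_k = max(scores, key=scores.get)
print(scores, "best k:", best_k)
```

Comparing the discovered clusters against the known rock classes can reveal whether the feature space naturally separates the geology, which in turn informs feature engineering.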
Hand-in: Zip up all your notebooks, any other .py files you might have written, as well as the jpgs of your screenshots, into a single file and submit it through Stream. Also submit your reading log, and extract a pdf version of your notebook and submit this alongside your other files. If, and only if, Stream is down, email the solution to the lecturer.
        Guidelines for Generative AI Use on Project 3
        In professional practice, AI tools can accelerate workflows. At university, our priority is your own skill development—data intuition, experimental design, critical interpretation, and reproducible code. To support learning without undermining it, you may use generative AI only in a Planning capacity and as described below. Any other use is prohibited.
        Permitted Uses
        You may consult AI to:
        1. Clarify Concepts & Theory
        o Background on algorithms, metrics, or data-science principles.
        ▪ “How does k-NN differ from logistic regression?”
        ▪ “What are common sources of data leakage in time-series classification?”
        2. Plan & Critique Experimental Design
o Feedback on your pipeline, methodology, or evaluation strategy—without generating code.
▪ “Does stratified vs. time-aware CV make sense for TBM data?”
▪ “What should I watch for when scaling sensor readings?”
        3. Troubleshoot & Debug
o High-level debugging hints or explanations of error messages—provided you write and test the code yourself.
        ▪ “Why might my MinMaxScaler produce constant features?”
▪ “What causes a ‘ValueError: Found input variables with inconsistent numbers of samples’?”
        4. Explore Visualization Ideas
        o Suggestions for effective plots or comparison layouts—without copying generated code or images.
        ▪ “How best to show feature-importance rankings in a table or chart?”
▪ “What are clear ways to compare accuracy vs. k in k-NN?”
5. Engage Critically with Literature
        o Summaries of academic methods or alternative interpretations—integrated into your own reading log.
▪ “What are alternatives to ANOVA F-tests for univariate feature selection?”
▪ “How do researchers validate time-series classifiers in engineering?”
Prohibited Uses
You must not:
        • Paste AI-generated code or snippets directly into your notebook.
        • Prompt AI to solve assignment tasks step-by-step.
        • Paraphrase AI outputs as your own original work.
• Submit AI-generated analyses, interpretations, or visualizations without substantial independent development.
If you have any questions or concerns about this assignment, please ask the lecturer sooner rather than waiting until close to the submission deadline.

