
        158.755-2025 Semester 1
        Massey University
        Project 3
Deadline: Submit by midnight of 15 May 2025.
Evaluation: 25% of your final course grade.
Late Submission: See Course Guide.
Work: This assignment may be done in pairs. No more than two people per group are allowed. Should you choose to work in pairs, you must indicate this upon submission of your assignment.
Purpose: Learning outcomes 1 - 5 from the course outline.
Project outline:
Kaggle is an online crowdsourcing platform for machine learning competitions, where companies and researchers submit problems and datasets, and the machine learning community competes to produce the best solutions. This is a perfect training ground for real-world problems. It is an opportunity for data scientists to develop their portfolio, which they can advertise to prospective employers, and it is also an opportunity to win prizes.
        For this project, you are going to work on a Kaggle dataset.
        You will first need to create an account with Kaggle. Then familiarise yourself with the Kaggle platform.
Your task will be to work on a competition dataset which is currently in progress. While you will be submitting your solutions and appearing on the Kaggle leaderboard, this project will be run as an in-class competition. The problem description and the dataset can be found here: https://www.kaggle.com/competitions/geology-forecast-challenge-open/overview
Note that this dataset and the overall problem are challenging. You will be trying to solve the problem with the algorithms and approaches that we have learned so far, and you will be able to submit a new solution up to 5 times each day; this will constrain the effectiveness of the final solutions that you can produce, but it will all be a valuable learning experience nonetheless.
        The competition is the Geology Forecast Challenge, which is a supervised classification problem where the task is to predict the type of geological material that a tunnel boring machine (TBM) will encounter ahead in the rock face.
        What is being predicted? You are predicting the rock class label (e.g. “Shale,” “Sandstone,” “Clay,” etc.), which represents the type of ground material at specific positions ahead of the tunnel boring machine.
        What does the data represent? The input features are sensor readings collected from the TBM during its operation, including measurements like thrust force, penetration rate, torque, advance rate, and more. These are time series of machine telemetry that reflect how the TBM interacts with the geological material. The labels (target values) represent ground truth rock types observed during the boring process.
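To make the data description above concrete, a minimal first-look sketch in pandas might look as follows. The file name "train.csv" and the column name "label" are assumptions for illustration; check the competition's Data page for the actual schema.

```python
# Minimal first-look sketch. "train.csv" and the "label" column name
# are assumptions -- consult the competition's Data page for the real schema.
import pandas as pd

df = pd.read_csv("train.csv")

print(df.shape)                                                # rows x columns
print(df.dtypes.value_counts())                                # feature types at a glance
print(df.isna().sum().sort_values(ascending=False).head(10))   # worst missing-value offenders
print(df["label"].value_counts(normalize=True))                # class balance
print(df.describe().T[["mean", "std", "min", "max"]])          # ranges and outlier hints
```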
        Task:
        Your work is to be done using the Jupyter Notebook (Kaggle provides a development/testing environment), which you will submit as the primary component of your work. A notebook template will be provided for you showing which information you must at least report as part of your submission.
        Your tasks are as follows:
        1. You will first need to create an account with Kaggle.
        2. Then familiarise yourself with the Kaggle platform.
        3. Familiarise yourself with the submission/testing process.
        4. Download the datasets, then explore and perform thorough EDA.
        5. Devise an experimental plan for how you intend to empirically arrive at the most accurate solution.
6. Explore the accuracy of kNN for solving the problem and use the scores from your kNN for the class competition (a cross-validation sketch for this step follows this list).
7. Explore scikit-learn (or other libraries) and employ a suite of different machine learning algorithms not yet covered in class, and benchmark them against your kNN performance (see the benchmarking sketch after this list).
8. Investigate which subsets of features are effective, then build solutions based on this analysis and reasoning (a feature-selection sketch follows this list).
9. Devise solutions to these machine learning problems that are creative, innovative and effective. Since much of machine learning is trial and error, you are asked to continually refine and incrementally improve your solution. Keep track of all the different strategies you have used, how they have performed, and how your accuracy has improved or deteriorated with different strategies. Also provide your reasoning for trying each strategy and approach. Remember, you can submit up to four solutions to Kaggle per day. Keep track of your performance and consider even graphing it.
10. Take a screenshot of your final and best submission score and your standing on the Kaggle leaderboard, and save it as a jpg file. Then embed this jpg screenshot into your notebook, and record your submission scores on the class Google Sheet (to be made available on Stream), where the class leaderboard will be kept.
11. If you are working in pairs, you must explain in the notebook, in the Appendix, what contribution each person made to the project.
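For task 6, a minimal sketch of the kNN baseline-and-tuning loop is shown below. It benchmarks several values of k and two distance metrics with cross-validation, reporting the CV mean ± std that the rubric asks for. The synthetic data from make_classification is a stand-in, since the sketch makes no assumptions about the competition's actual schema; note also that for time-ordered telemetry a time-aware split may be more appropriate than the stratified shuffle used here.

```python
# kNN baseline and tuning sketch: benchmark k and distance metric with CV.
# Synthetic data stands in for the competition data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for metric in ("euclidean", "manhattan"):
    for k in (1, 3, 5, 11, 21, 51):
        # Scaling inside the pipeline keeps it within each CV fold, which
        # avoids train/test leakage from fitting the scaler globally.
        model = make_pipeline(StandardScaler(),
                              KNeighborsClassifier(n_neighbors=k, metric=metric))
        scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
        print(f"metric={metric:9s} k={k:3d} "
              f"acc={scores.mean():.3f} +/- {scores.std():.3f}")
```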
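For task 7, one way to benchmark additional algorithm families against the kNN baseline is a small randomized search per model, as sketched below. The three families shown (tree ensemble, linear, kernel) and their hyperparameter grids are illustrative choices, not prescribed ones.

```python
# Benchmarking sketch: three illustrative algorithm families, each tuned
# with a small randomized search, reporting best CV score and fit time.
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

candidates = {
    "random_forest": (RandomForestClassifier(random_state=0),
                      {"n_estimators": [100, 300], "max_depth": [None, 10, 30]}),
    "logreg": (make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000)),
               {"logisticregression__C": [0.01, 0.1, 1, 10]}),
    "svm_rbf": (make_pipeline(StandardScaler(), SVC()),
                {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]}),
}

for name, (model, grid) in candidates.items():
    search = RandomizedSearchCV(model, grid, n_iter=4, cv=cv,
                                scoring="accuracy", random_state=0)
    t0 = time.time()
    search.fit(X, y)
    print(f"{name:14s} best_cv={search.best_score_:.3f} "
          f"params={search.best_params_} fit_time={time.time() - t0:.1f}s")
```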
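For task 8, a simple starting point for feature-subset analysis is univariate selection inside a CV pipeline, as sketched below. SelectKBest with the ANOVA F-test is one option among several; model-based selection and permutation importance are others.

```python
# Feature-subset sketch: compare CV accuracy as the number of selected
# features grows. Synthetic data again stands in for the real dataset.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

for k_feats in (2, 4, 8, 12, 20):
    # Selection happens inside the pipeline, so each CV fold picks its own
    # features from its own training split -- another leakage guard.
    model = make_pipeline(StandardScaler(),
                          SelectKBest(f_classif, k=k_feats),
                          KNeighborsClassifier(n_neighbors=11))
    scores = cross_val_score(model, X, y, cv=cv)
    print(f"top-{k_feats:2d} features: acc={scores.mean():.3f} +/- {scores.std():.3f}")
```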
The Kaggle platform and its community of data scientists provide considerable help in the form of 'kernels', which are often Python notebooks and can help you get started. There are also discussion fora which can offer help and ideas on how to go about solving problems. Copying code from these resources is not acceptable for this assignment. Doing so can be regarded as plagiarism and can be followed by disciplinary action.
        Marking criteria:
        Marks will be awarded for different components of the project using the following rubric:
Component: EDA (5 marks)
- Breadth: summary stats, class balance, missing-value and outlier checks, chainage/time trends.
- Visuals: histograms, boxplots, correlation heatmaps, time series, etc.
- Preparation: imputation or removal of missing data, outlier treatment, with clear rationale where needed.
- Narrative: concise markdown explaining findings and guiding the modeling choices.
Component: kNN classification (30 marks)
- Baseline & Tuning: various values of k and different distance metrics must be benchmarked; report CV mean ± std, the final test accuracy, and the custom metric used in the competition.
- Leakage Control: ensure no data leakage happens.
- Presentation: table of results (e.g. k vs. accuracy or another suitable metric), e.g. a plot of accuracy vs. k, and a confusion matrix if appropriate.
- Interpretation: discuss under-/over-fitting as k varies, and justify your chosen k.
- Leaderboard: only these kNN results go into the class Google Sheet.
Component: Classification Modeling (Other Algorithms) (25 marks)
- Model Diversity: at least three algorithm families (e.g. tree-based, linear, kernel); brief rationale for each.
- Tuning: grid or randomized search with CV; report best hyperparameters.
- Comparison Table: side-by-side metrics (accuracy, macro-averaged precision/recall, train time).
- Interpretation: which models outperform kNN and why.
- Note: these results inform your analysis and earn marks for this component only; they are not entered into the class leaderboard.
Component: Analysis (20 marks)
- Design Clarity: presentation and design of all your experiments.
- Cross-Validation: choice of testing strategies for all your experiments.
- Feature Selection: robustness in feature analysis and selection.
- Engineered Features: at least one new feature with before/after performance across all your experiments (see the sketch after this rubric).
- Data-Leakage Prevention: explicit note on where and how you guard against leakage.
Component: Kaggle submission score (20 marks)
- Successful submission of predictions to Kaggle, listing of the score on the class leaderboard, and position on the class leaderboard based ONLY ON THE kNN models.
- The winning student will receive full marks. The next best student will receive 17 marks, and every subsequent placing will receive one less mark, with a minimum of 10 marks for a successful submission.
- An interim solution must be submitted by May 1 and the class leaderboard document (the Google Sheets link is below) must be updated; this constitutes 10 marks. If this is not completed by that date, 10 marks will be deducted from the submission score. For this, you must submit a screenshot of your submission date and score.
Component: Bonus marks
- Cluster analysis (max 5): use of cluster analysis for exploring the dataset.
- Additional feature extraction (max 5): exceptional work in extracting additional features from this dataset and incorporating them into the training set, together with a comparative analysis showing whether or not they have increased predictive accuracy.
Component: Reading Log (PASS)
- The compiled reading logs up to the current period.
- The peer discussion summaries for each week.
- Any relevant connections between your readings and your analytical work in the notebook. If a research paper influenced how you approached an implementation, mention it.
Google Sheets link:
                     https://docs.google.com/spreadsheets/d/1CxgPKnIwzakbmliKiz1toatGz45HFQynaLh54RRU2lo/edit?usp=sharing
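As a concrete illustration of the before/after comparison the Analysis rubric asks for, the sketch below scores one hypothetical engineered feature (a rolling mean of a synthetic sensor channel) against the baseline feature set. The column names are stand-ins; substitute real telemetry columns from the competition data.

```python
# Before/after check for one engineered feature. The "sensor_*" columns
# and the rolling-mean feature are hypothetical stand-ins.
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(1000, 5)),
                  columns=[f"sensor_{i}" for i in range(5)])  # stand-in telemetry
y = rng.integers(0, 3, size=1000)                             # stand-in labels

# Engineered feature: smoothed version of one noisy sensor channel.
df["sensor_0_rollmean"] = df["sensor_0"].rolling(10, min_periods=1).mean()

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=11))

base_cols = [f"sensor_{i}" for i in range(5)]
before = cross_val_score(model, df[base_cols], y, cv=cv)
after = cross_val_score(model, df[base_cols + ["sensor_0_rollmean"]], y, cv=cv)
print(f"before: {before.mean():.3f}  after: {after.mean():.3f}")
```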
Hand-in: Zip up all your notebooks, any other .py files you might have written, as well as jpgs of your screenshots, into a single file and submit it through Stream. Also submit your reading log, and extract a pdf version of your notebook and submit it alongside your other files. If, and only if, Stream is down, then email the solution to the lecturer.
        Guidelines for Generative AI Use on Project 3
        In professional practice, AI tools can accelerate workflows. At university, our priority is your own skill development—data intuition, experimental design, critical interpretation, and reproducible code. To support learning without undermining it, you may use generative AI only in a Planning capacity and as described below. Any other use is prohibited.
        Permitted Uses
        You may consult AI to:
        1. Clarify Concepts & Theory
        o Background on algorithms, metrics, or data-science principles.
        ▪ “How does k-NN differ from logistic regression?”
        ▪ “What are common sources of data leakage in time-series classification?”
2. Plan & Critique Experimental Design
o Feedback on your pipeline, methodology, or evaluation strategy—without generating code.
▪ “Does stratified vs. time-aware CV make sense for TBM data?”
▪ “What should I watch for when scaling sensor readings?”
3. Troubleshoot & Debug
o High-level debugging hints or explanations of error messages—provided you write and test the code yourself.
▪ “Why might my MinMaxScaler produce constant features?”
▪ “What causes a ‘ValueError: Found input variables with inconsistent numbers of samples’?”
4. Explore Visualization Ideas
o Suggestions for effective plots or comparison layouts—without copying generated code or images.
▪ “How best to show feature-importance rankings in a table or chart?”
▪ “What are clear ways to compare accuracy vs. k in k-NN?”
5. Engage Critically with Literature
o Summaries of academic methods or alternative interpretations—integrated into your own reading log.
▪ “What are alternatives to ANOVA F-tests for univariate feature selection?”
▪ “How do researchers validate time-series classifiers in engineering?”
Prohibited Uses
You must not:
• Paste AI-generated code or snippets directly into your notebook.
• Prompt AI to solve assignment tasks step-by-step.
• Paraphrase AI outputs as your own original work.
• Submit AI-generated analyses, interpretations, or visualizations without substantial independent development.
If you have any questions or concerns about this assignment, please ask the lecturer sooner rather than later, not just before the submission deadline.

