        MSE 5760: Spring 2025 HW 6 (due 05/04/25)
        Topic: Autoencoders (AE) and Variational Autoencoders (VAE)
        Background:
In this final homework, you will build a deep autoencoder, a convolutional
autoencoder, and a denoising autoencoder to reconstruct images of an isotropic composite
with different volume fractions of fibers distributed in the matrix. Five different volume
fractions of fibers are represented in the dataset, and these form five different class labels for
the composites. After the initial practice with AEs and reconstruction of images using latent
vectors, you will build a VAE to examine the same dataset. After training the VAE (as best
as you can using the free Colab resources to reproduce images), you will use it to generate
new images by randomly sampling datapoints from the learned probability distribution of
the data in latent space. Finally, you will build a conditional VAE to not only generate new
images but generate them for arbitrary volume fractions of fibers in the composite.
The entire dataset, containing 10,000 images of composites with five classes of
volume fractions of fibers, was built by Zequn He (currently a Ph.D. student in MEAM in
Prof. Celia Reina's group, who helped put together this course in Summer 2022 by designing
all the labs and homework sets). Each image in the dataset shows three fibers of different
volumes with circular cross sections. Periodic boundary conditions were used to generate
the images; hence, in some images, the three fiber particles may appear broken up into
more than three pieces. The total cross-sectional area of all the fibers in each image can,
however, be divided equally among three fibers. Please do not use this dataset for other
work or share it on data portals without prior permission from Zequn He
(hezequn@seas.upenn.edu).
Due to the large demands on memory and the intricacies of the AE-VAE
architecture, the results obtained will not be of the same level of accuracy and quality as
was possible in the previous homework sets. No train/test split is recommended, as all
10,000 images are used for training purposes. You may, however, carry out further analysis
using a train/test split, tuning the hyperparameters, or changing the architecture for
bonus points. The maximum bonus awarded for this homework will be 5 points.
        **********************************Please Note****************************
Sample codes for building the AE, VAE, and a conditional GAN were provided in
Lab 6. There is no separate notebook provided for the homework; students will
have to prepare one. TensorFlow and Keras were used in Lab 6 and are recommended
for this homework. You are welcome to use other libraries such as PyTorch.
        ************************************************************************
        1. Model 1: Deep Autoencoder model (20 points)
Import the needed libraries. Load the original dataset from Canvas. Check the
        dimensions of each loaded image for consistency. Scale the images.
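As a starting point, here is a minimal loading-and-scaling sketch. The file name and array keys are assumptions; match them to the actual Canvas download.

```python
# Minimal loading/scaling sketch. "composite_images.npz" and its keys
# "images"/"labels" are hypothetical; adjust to the Canvas file.
import numpy as np

data = np.load("composite_images.npz")
x = data["images"].astype("float32") / 255.0   # scale pixel values to [0, 1]
y = data["labels"]                             # five volume-fraction classes

print(x.shape, y.shape)   # expect (10000, H, W) and (10000,)
```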
        1.1 Print the class labels and the number of images in each class. Print the shape of 
        the input tensor representing images and the shape of the vector representing the 
        class labels. (2 points)
1.2 A measure equivalent to the volume fraction of fibers in each composite image is
the mean pixel value of the image. Because the images are low-resolution, you may
notice a slight discrepancy between the assigned class value of an image and its
calculated mean pixel intensity. As the resolution of the images increases, there will be
a negligible difference between the assigned class label and the pixel mean of the
image. Henceforth, we shall use the pixel mean (PM) intensity of the images as
the class label. Print a representative sample of ten images showing the volume
fraction of fibers in the composite along with the PM value of the image. (3 points)
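A possible sketch of the PM computation and the ten-image plot, assuming `x` holds the scaled images with shape (N, H, W):

```python
# Pixel mean (PM) per image as a proxy for fiber volume fraction.
import matplotlib.pyplot as plt

pm = x.reshape(len(x), -1).mean(axis=1)   # mean pixel intensity per image

fig, axes = plt.subplots(1, 10, figsize=(20, 2))
for ax, img, m in zip(axes, x[:10], pm[:10]):
    ax.imshow(img, cmap="gray")
    ax.set_title(f"PM = {m:.2f}")
    ax.axis("off")
plt.show()
```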
1.3 Build the following deep AE using a latent dimension of 64 (a minimal Keras sketch follows the list).
        (a) Let the first layer of the encoder have 256 neurons.
        (b) Let the second layer of the encoder have 128 neurons.
        (c) Let the last layer of the encoder be the context or latent vector.
        (d) Use ReLU for the activation function in all of the above layers.
        (e) Build a deep decoder with its input being the context layer of the encoder.
        (f) Build two more layers of the decoder with 128 and 256 neurons, respectively. 
        These two layers can use the ReLU activation function.
        (g) Build the final layer of the decoder such that its output is compatible with the 
        reconstruction of the original input shape tensor. Use sigmoid activation for the 
        final output layer of the decoder.
        (h) Use ADAM as your optimizer and a standard learning rate. Let the loss be the 
        mean square error loss. Compile the AE and train it for at least 50 epochs.
        (10 points)
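The layer sizes in the sketch below come from the spec above; the model names, batch size, and the reuse of the scaled array `x` from the loading sketch are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

input_dim = x.shape[1] * x.shape[2]   # flattened image size
latent_dim = 64

encoder = keras.Sequential([
    layers.Input(shape=(input_dim,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(latent_dim, activation="relu"),    # context / latent vector
], name="encoder")

decoder = keras.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dense(input_dim, activation="sigmoid"),  # outputs in [0, 1]
], name="decoder")

ae = keras.Model(encoder.input, decoder(encoder.output), name="deep_ae")
ae.compile(optimizer="adam", loss="mse")

x_flat = x.reshape(len(x), -1)        # flatten images for the dense AE
history = ae.fit(x_flat, x_flat, epochs=50, batch_size=128)
```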
1.4 Print the summary of the encoder and decoder blocks showing the output shape of
each layer along with the number of parameters that need to be trained. Monitor
and print the loss for each epoch. Plot the loss as a function of the epochs. (2 points)
1.5 Plot the first ten reconstructed images showing both the original and reconstructed
        images. (3 points)
        2. Model 2: Convolutional Autoencoder model (20 points)
2.1 Build the following convolutional AE with a latent dimension of 64 (a sketch follows the list).
(a) In the first convolution block of the encoder, use 8 filters with 3x3 kernels,
ReLU activation, and zero padding. Apply a max pooling layer with a kernel of
size 2.
(b) In the second convolution block, use 16 filters with 3x3 kernels, ReLU activation,
and zero padding. Apply a max pooling layer with a kernel of size 2.
(c) In the third block of the encoder, use 32 filters with 3x3 kernels, ReLU activation,
and zero padding. Apply a max pooling layer with a kernel of size 2.
(d) Flatten the obtained feature map and then use a Dense layer with a ReLU
activation function to extract the latent variables.
(e) Build the decoder in the reverse order of the encoder filters, with the latent
output layer of the encoder serving as the input to the decoder part.
(f) Use ADAM as your optimizer and a standard learning rate. Let the loss be the
mean square error loss. Compile the convolutional AE and train it for at least
50 epochs.
        (10 points)
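A sketch of this convolutional AE, assuming single-channel images whose height and width are divisible by 8 (three 2x2 poolings); Conv2DTranspose layers are one common way to mirror the encoder.

```python
from tensorflow import keras
from tensorflow.keras import layers

h, w = x.shape[1], x.shape[2]
latent_dim = 64

conv_encoder = keras.Sequential([
    layers.Input(shape=(h, w, 1)),
    layers.Conv2D(8, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(16, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(latent_dim, activation="relu"),
], name="conv_encoder")

fh, fw = h // 8, w // 8   # feature-map size after three poolings
conv_decoder = keras.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(fh * fw * 32, activation="relu"),
    layers.Reshape((fh, fw, 32)),
    layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(8, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid"),
], name="conv_decoder")

conv_ae = keras.Model(conv_encoder.input, conv_decoder(conv_encoder.output))
conv_ae.compile(optimizer="adam", loss="mse")
conv_ae.fit(x[..., None], x[..., None], epochs=50, batch_size=128)
```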
2.2 Print the summary of the encoder and decoder blocks showing the output shape of
each layer along with the number of parameters that need to be trained. Monitor
and print the loss for each epoch. Plot the loss as a function of the epochs. (5 points)
        2.3 Plot the first ten reconstructed images showing both the original and reconstructed 
        images. (5 points)
        3. Model 3: Denoising convolutional Autoencoder model (15 points)
3.1 Add Gaussian noise to each image. Choose a Gaussian with a mean of zero and a
small standard deviation, typically ~ 0.2. Plot a sample of five original images with
the noise added. (3 points)
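One way to add the noise, clipping back to the valid pixel range:

```python
import numpy as np

sigma = 0.2   # small standard deviation, as suggested above
x_noisy = np.clip(x + np.random.normal(0.0, sigma, x.shape), 0.0, 1.0)
```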
        3.2 Use the same convolutional autoencoder as in Problem 2 but with noisy images fed 
        to the encoder. Train and display all the information as in 2.2 and 2.3.
        (12 points)
        4. Model 4: Variational Autoencoder model (25 points)
4.1 Set the latent dimension of the VAE to 64. Build a convolutional autoencoder with
the following architecture. Set the first block to have 32 filters, 3x3 kernels with
stride = 2, and zero padding.
4.2 Build the second block with 64 filters, 3x3 kernels, stride = 2, and zero padding. Use
ReLU in both blocks. Apply a max pooling layer with a kernel of size 2x2.
        4.3 Build an appropriate output layer of the encoder that captures the latent space 
        probability distribution.
        4.4 Define the reparametrized mean and variance of this distribution.
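The standard way to implement 4.4 is the reparameterization trick, z = mu + exp(0.5 * log_var) * eps with eps ~ N(0, I), so that gradients flow through the learned mean and log-variance. A minimal Keras layer:

```python
import tensorflow as tf
from tensorflow.keras import layers

class Sampling(layers.Layer):
    """Draw z = mu + sigma * eps, with eps ~ N(0, I)."""
    def call(self, inputs):
        z_mean, z_log_var = inputs
        eps = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * eps
```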
        4.5 Build the convolutional decoder in reverse order. Apply the same kernels, stride 
        and padding as in the encoder above. Choose the output layer of the decoder and 
        apply the appropriate activation function.
4.6 Compile and train the model. Monitor the reconstruction loss, Kullback-Leibler
loss, and the total loss. Plot all three quantities for 500 epochs. (10 points)
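For reference, a sketch of the three monitored quantities, using the closed-form KL divergence between the learned Gaussian and the standard normal prior; tensor shapes (batch, H, W, 1) are assumed.

```python
import tensorflow as tf

def vae_losses(x, x_hat, z_mean, z_log_var):
    # Reconstruction: squared error summed over pixels, averaged over the batch.
    recon = tf.reduce_mean(tf.reduce_sum(tf.square(x - x_hat), axis=(1, 2, 3)))
    # KL(N(mu, sigma^2) || N(0, I)), summed over latent dimensions.
    kl = -0.5 * tf.reduce_mean(
        tf.reduce_sum(1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var),
                      axis=1))
    return recon, kl, recon + kl
```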
        4.7 Plot the first ten reconstructed images along with their originals. (5 points)
4.8 Generate ten random latent variables from a standard Gaussian with mean zero and
unit variance. Display the images generated from these random values of the latent
vector. Comment on the quality of your results and how they may differ from the input
images. Mention at least one improvement that could be implemented to
improve the results. (3+3+4 = 10 points)
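A sketch of the sampling step, assuming a trained decoder (here called `vae_decoder`) from 4.5:

```python
import tensorflow as tf

z = tf.random.normal(shape=(10, 64))   # ten draws from the standard normal prior
generated = vae_decoder.predict(z)     # decode latent samples to image space
```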
        5. Model 5: Conditional Variational Autoencoder model (20 points)
A conditional VAE differs from a VAE by allowing an extra input
variable to both the encoder and the decoder, as shown below. The extra input could
be a class label 'c' for each image. This extra label enables one to infer the
conditional probability distribution that describes the latent vector conditioned on
the class label 'c' of the input. In a VAE, using the variational inference principle,
one infers the Gaussian distribution (by learning its mean and variance) of the
latent vector representing each input 'x'. In a conditional VAE, one infers the
Gaussian distribution of the latent vector conditioned on the extra input variable,
the label.
For the dataset used in this homework, there are two advantages of the
conditional VAE compared to the VAE: (i) the conditional VAE provides a cheap
way to validate the model, by comparing the pixel mean of the generated images
with the conditional class label values (pixel means) of the points in latent space
used to generate the images; (ii) once the validation is done satisfactorily, the
trained conditional VAE can be used to generate images of composites with
arbitrary volume fractions of fibers with sufficient confidence.
        A conditional VAE. (source: https://ijdykeman.github.io/ml/2016/12/21/cvae.html)
A good explanation of the conditional VAE, in addition to the resource cited in the
figure above, is this: https://agustinus.kristia.de/techblog/2016/12/17/conditional-vae/.
        A conditional GAN (cGAN) toy problem was shown in Lab 6 where the volume 
        fraction (replaced by pixel mean for cheaper model validation) was the design 
        parameter, and thus, the condition input into the cGAN. In this question, you will 
        build a conditional VAE for the same task of generating new images of composites 
        as in Problem 4 by randomly choosing points in the latent space. Since each point 
        in the latent space represents a conditional Gaussian distribution, it also has a class 
        label. Therefore, it becomes possible to calculate the pixel mean of a generated 
        image and compare it with the target ‘c’ value of the random point in latent space. 
        It is recommended that students familiarize themselves with the code for providing 
        the input to the cGAN with class labels and follow similar logic for building the 
conditional VAE. You may also seek help from the TAs if necessary.
5.1 Create an array that contains both images and labels (the pixel mean of each image).
Note that the label here is the condition, and it should be stored in an additional channel
of each image.
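One way to do this (a sketch, assuming `x` with shape (N, H, W) and the pixel means `pm` from Problem 1) is to broadcast each image's PM into a second, constant channel:

```python
import numpy as np

cond = pm.reshape(-1, 1, 1, 1) * np.ones_like(x[..., None])  # constant PM channel
x_cond = np.concatenate([x[..., None], cond], axis=-1)
print(x_cond.shape)   # (10000, H, W, 2): image channel + condition channel
```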
5.2 Use the same structure, activation functions, and optimizer as those used to build
        the VAE in Problem 4. Print the summary of the encoder and decoder blocks 
        showing the output shape of each layer along with the number of parameters that 
        need to be trained. (5 points)
5.3 Train the cVAE for 500 epochs. Plot the reconstruction loss, Kullback-Leibler loss,
        and the total loss. Plot the first ten reconstructed images along with their originals. 
        Include values of the pixel mean for both sets of images. (5 points)
        5.4 Generate 10 fake conditions (i.e., ten volume fractions represented as pixel means 
        evenly spaced within the range 0.1 to 0.4 as used in Lab 6) for image generation. 
        Print the shape of the generated latent variable. Print the target volume fraction (or 
        pixel mean). Show the shape of the array that combines the latent variables and fake 
        conditions. Print the shape of the generated image tensor. (2 points)
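A sketch of this generation step; `cvae_decoder` is a hypothetical name for the trained cVAE decoder, and appending the condition to the latent vector is one plausible wiring:

```python
import numpy as np

fake_pm = np.linspace(0.1, 0.4, 10).astype("float32")   # ten fake conditions
z = np.random.normal(size=(10, 64)).astype("float32")   # random latent variables
print(z.shape, fake_pm)

z_cond = np.concatenate([z, fake_pm[:, None]], axis=1)  # latent + condition
print(z_cond.shape)                                     # (10, 65)

generated = cvae_decoder.predict(z_cond)                # hypothetical decoder
print(generated.shape)
```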
        5.5 Plot the 10 generated images. For each image show the generated condition (the 
        pixel mean of each image generated in 5.4) and the pixel mean calculated from the 
        image itself. (3 points)
        5.6 Compare the set of generated images from the conditional VAE with the ones 
        obtained in Lab 6 using cGAN. Comment on their differences and analyze the 
        possible causes for the differences. (5 points)
